Feb 18 00:34:05 crc systemd[1]: Starting Kubernetes Kubelet... Feb 18 00:34:05 crc restorecon[4682]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 18 00:34:05 
crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 
00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc 
restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 
crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 
crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:05 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 
00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:34:06 crc 
restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 
00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 
00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc 
restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:34:06 crc restorecon[4682]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 00:34:07 crc kubenswrapper[4858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 00:34:07 crc kubenswrapper[4858]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 00:34:07 crc kubenswrapper[4858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 00:34:07 crc kubenswrapper[4858]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 00:34:07 crc kubenswrapper[4858]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 00:34:07 crc kubenswrapper[4858]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.159607 4858 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164838 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164867 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164878 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164886 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164896 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164904 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164912 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164921 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164929 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164937 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164947 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164956 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164965 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164974 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164984 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164991 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.164999 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165009 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165018 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165026 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165034 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165043 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165051 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165059 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165067 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165074 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165082 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165090 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165098 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165105 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165114 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165121 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165132 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165142 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165151 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165160 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165169 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165177 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165184 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165193 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165200 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165208 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165216 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165223 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165230 4858 feature_gate.go:330] unrecognized feature gate: Example Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165238 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165248 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
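Editor's note: the long runs of "unrecognized feature gate" warnings above (and repeated further down each time the gate set is re-parsed) appear to be OpenShift-specific gate names passed through the rendered kubelet configuration; the upstream kubelet gate registry does not know them, so it warns and ignores them. For a quick inventory of which names are being skipped, a small sketch over saved journal output (the journalctl invocation and the kubelet.log file name are illustrative assumptions, not taken from this log):

```python
# List the distinct feature-gate names the kubelet did not recognize.
# Assumes the journal was saved first, e.g. with:
#   journalctl -u kubelet --no-pager > kubelet.log   (file name is an assumption)
import re
from collections import Counter

PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

counts = Counter()
with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        for name in PATTERN.findall(line):
            counts[name] += 1

for name, seen in sorted(counts.items()):
    print(f"{name}: warned {seen} time(s)")
```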
Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165258 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165267 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165275 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165282 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165290 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165298 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165305 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165313 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165321 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165329 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165337 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165344 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165352 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165360 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165368 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165376 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165383 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165391 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165400 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165408 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165415 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165423 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165431 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.165438 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166368 4858 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166392 4858 flags.go:64] FLAG: 
--allowed-unsafe-sysctls="[]" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166406 4858 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166418 4858 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166428 4858 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166438 4858 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166449 4858 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166460 4858 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166469 4858 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166478 4858 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166488 4858 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166526 4858 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166536 4858 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166545 4858 flags.go:64] FLAG: --cgroup-root="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166554 4858 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166563 4858 flags.go:64] FLAG: --client-ca-file="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166573 4858 flags.go:64] FLAG: --cloud-config="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166582 4858 flags.go:64] FLAG: --cloud-provider="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166590 4858 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166601 4858 flags.go:64] FLAG: --cluster-domain="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166609 4858 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166618 4858 flags.go:64] FLAG: --config-dir="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166629 4858 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166638 4858 flags.go:64] FLAG: --container-log-max-files="5" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166649 4858 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166658 4858 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166667 4858 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166677 4858 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166686 4858 flags.go:64] FLAG: --contention-profiling="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166694 4858 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166703 4858 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 18 00:34:07 
crc kubenswrapper[4858]: I0218 00:34:07.166712 4858 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166721 4858 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166732 4858 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166741 4858 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166749 4858 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166758 4858 flags.go:64] FLAG: --enable-load-reader="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166770 4858 flags.go:64] FLAG: --enable-server="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166779 4858 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166791 4858 flags.go:64] FLAG: --event-burst="100" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166800 4858 flags.go:64] FLAG: --event-qps="50" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166809 4858 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166818 4858 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166828 4858 flags.go:64] FLAG: --eviction-hard="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166849 4858 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166858 4858 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166866 4858 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166876 4858 flags.go:64] FLAG: --eviction-soft="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166886 4858 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166896 4858 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166905 4858 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166914 4858 flags.go:64] FLAG: --experimental-mounter-path="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166922 4858 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166931 4858 flags.go:64] FLAG: --fail-swap-on="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166940 4858 flags.go:64] FLAG: --feature-gates="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166950 4858 flags.go:64] FLAG: --file-check-frequency="20s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166959 4858 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166968 4858 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166977 4858 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166987 4858 flags.go:64] FLAG: --healthz-port="10248" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.166996 4858 flags.go:64] FLAG: --help="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 
00:34:07.167005 4858 flags.go:64] FLAG: --hostname-override="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167013 4858 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167022 4858 flags.go:64] FLAG: --http-check-frequency="20s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167032 4858 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167041 4858 flags.go:64] FLAG: --image-credential-provider-config="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167049 4858 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167058 4858 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167066 4858 flags.go:64] FLAG: --image-service-endpoint="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167075 4858 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167084 4858 flags.go:64] FLAG: --kube-api-burst="100" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167093 4858 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167102 4858 flags.go:64] FLAG: --kube-api-qps="50" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167112 4858 flags.go:64] FLAG: --kube-reserved="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167121 4858 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167130 4858 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167139 4858 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167147 4858 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167156 4858 flags.go:64] FLAG: --lock-file="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167164 4858 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167174 4858 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167184 4858 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167196 4858 flags.go:64] FLAG: --log-json-split-stream="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167205 4858 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167214 4858 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167223 4858 flags.go:64] FLAG: --logging-format="text" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167231 4858 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167241 4858 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167249 4858 flags.go:64] FLAG: --manifest-url="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167258 4858 flags.go:64] FLAG: --manifest-url-header="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167269 4858 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 
00:34:07.167278 4858 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167289 4858 flags.go:64] FLAG: --max-pods="110" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167297 4858 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167306 4858 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167316 4858 flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167325 4858 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167334 4858 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167343 4858 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167351 4858 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167370 4858 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167380 4858 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167389 4858 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167398 4858 flags.go:64] FLAG: --pod-cidr="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167406 4858 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167418 4858 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167427 4858 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167436 4858 flags.go:64] FLAG: --pods-per-core="0" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167445 4858 flags.go:64] FLAG: --port="10250" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167454 4858 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167464 4858 flags.go:64] FLAG: --provider-id="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167473 4858 flags.go:64] FLAG: --qos-reserved="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167481 4858 flags.go:64] FLAG: --read-only-port="10255" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167491 4858 flags.go:64] FLAG: --register-node="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167524 4858 flags.go:64] FLAG: --register-schedulable="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167533 4858 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167548 4858 flags.go:64] FLAG: --registry-burst="10" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167557 4858 flags.go:64] FLAG: --registry-qps="5" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167566 4858 flags.go:64] FLAG: --reserved-cpus="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167574 4858 flags.go:64] FLAG: --reserved-memory="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167585 4858 flags.go:64] 
FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167594 4858 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167603 4858 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167612 4858 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167621 4858 flags.go:64] FLAG: --runonce="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167629 4858 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167639 4858 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167648 4858 flags.go:64] FLAG: --seccomp-default="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167657 4858 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167666 4858 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167675 4858 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167685 4858 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167694 4858 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167703 4858 flags.go:64] FLAG: --storage-driver-secure="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167711 4858 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167720 4858 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167729 4858 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167738 4858 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167747 4858 flags.go:64] FLAG: --system-cgroups="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167756 4858 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167770 4858 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167778 4858 flags.go:64] FLAG: --tls-cert-file="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167787 4858 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167798 4858 flags.go:64] FLAG: --tls-min-version="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167807 4858 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167817 4858 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167826 4858 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167835 4858 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167844 4858 flags.go:64] FLAG: --v="2" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167855 4858 flags.go:64] FLAG: --version="false" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167866 4858 flags.go:64] FLAG: 
--vmodule="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167876 4858 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.167886 4858 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168079 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168090 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168099 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168107 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168117 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168125 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168134 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168142 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168150 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168159 4858 feature_gate.go:330] unrecognized feature gate: Example Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168168 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168176 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168185 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168193 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168201 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168212 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168222 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168231 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168240 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168254 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168263 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168271 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168278 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168286 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168294 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168303 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168310 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168318 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168326 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168334 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168341 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168349 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168357 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168364 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168372 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168380 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168388 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168395 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168403 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168410 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168419 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168427 4858 feature_gate.go:330] 
unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168435 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168442 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168449 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168459 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168468 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168477 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168486 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168517 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168526 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168537 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168545 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168553 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168561 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168568 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168581 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168591 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168601 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168611 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168620 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168628 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168637 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168645 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168654 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168661 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168669 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168677 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168684 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168692 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.168700 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.168722 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.180701 4858 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.180744 4858 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180874 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180885 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180894 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180904 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180912 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180920 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180928 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180936 4858 feature_gate.go:330] unrecognized feature gate: 
VSphereDriverConfiguration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180944 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180952 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180960 4858 feature_gate.go:330] unrecognized feature gate: Example Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180968 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180976 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180984 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.180992 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181000 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181008 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181015 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181024 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181033 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181041 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181048 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181056 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181064 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181075 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181086 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181097 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181106 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181115 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181123 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181131 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181139 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181147 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181154 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181162 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181170 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181177 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181185 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181192 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181200 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181208 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181215 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181223 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181230 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181238 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181246 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181253 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181261 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181269 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181276 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181284 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181291 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181299 4858 
feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181307 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181314 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181324 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181332 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181340 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181350 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181359 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181367 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181375 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181383 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181390 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181424 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181433 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181441 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181449 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181457 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181464 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181472 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.181484 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181778 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181791 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181801 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181812 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181820 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181829 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181837 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181846 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181854 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181864 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181873 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181881 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181889 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181897 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181905 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181913 4858 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181921 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181929 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181937 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181946 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181954 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181962 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181969 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181977 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181985 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.181993 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182000 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182008 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182015 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182023 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182031 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182039 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182046 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182054 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182062 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182070 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182078 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182086 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182093 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182101 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182109 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182117 4858 feature_gate.go:330] unrecognized feature gate: Example Feb 18 00:34:07 
crc kubenswrapper[4858]: W0218 00:34:07.182124 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182132 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182139 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182147 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182155 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182164 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182174 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182182 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182193 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182203 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182212 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182221 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182229 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182240 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182250 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182259 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182267 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182275 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182282 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182291 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182298 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182308 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
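Editor's note: beyond the per-gate warnings, the feature_gate.go:386 lines (two appear above, another follows just below) record the gate set the kubelet actually resolved, in the form "feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}". A sketch for turning one of those summary lines into a Python dict, assuming the Go map formatting shown in this log:

```python
# Parse a kubelet "feature gates: {map[...]}" summary line into a dict.
import re

def parse_feature_gates(line: str) -> dict:
    """Extract {gate_name: bool} from a feature_gate.go:386 summary line."""
    match = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if not match:
        return {}
    gates = {}
    for pair in match.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

# Shortened sample built from values that appear in this log.
sample = ("I0218 00:34:07.182380 4858 feature_gate.go:386] feature gates: "
          "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}")
print(parse_feature_gates(sample))
# -> {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
```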
Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182318 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182326 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182335 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182343 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182352 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182360 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.182367 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.182380 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.183591 4858 server.go:940] "Client rotation is on, will bootstrap in background" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.188980 4858 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.189111 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
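Editor's note: the entries above show that client certificate rotation is enabled and that the kubelet loads its current client cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem; the lines that follow print its expiry and rotation deadline and then fail the first CSR because the API server at api-int.crc.testing:6443 is not reachable yet. To inspect the same expiry outside the kubelet, a minimal sketch, assuming the `cryptography` package is available and the path is readable on the node (on a live host, `openssl x509 -noout -enddate` against the same file does the equivalent):

```python
# Print subject and expiry of the first certificate in the kubelet's
# client cert/key bundle. Illustrative; assumes the `cryptography` package.
import re
from cryptography import x509

PEM_PATH = "/var/lib/kubelet/pki/kubelet-client-current.pem"

with open(PEM_PATH, "rb") as fh:
    data = fh.read()

# The bundle holds a certificate and a private key; grab the certificate block.
block = re.search(
    rb"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", data, re.S
)
if block is None:
    raise SystemExit(f"no certificate found in {PEM_PATH}")

cert = x509.load_pem_x509_certificate(block.group(0))
print("subject :", cert.subject.rfc4514_string())
print("notAfter:", cert.not_valid_after)  # naive datetime, UTC
```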
Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.190956 4858 server.go:997] "Starting client certificate rotation" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.191007 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.192058 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-12 05:38:58.81697494 +0000 UTC Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.192259 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.223828 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.227796 4858 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.231010 4858 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.244136 4858 log.go:25] "Validated CRI v1 runtime API" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.284097 4858 log.go:25] "Validated CRI v1 image API" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.286570 4858 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.291447 4858 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-00-29-39-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.291544 4858 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.319654 4858 manager.go:217] Machine: {Timestamp:2026-02-18 00:34:07.317084324 +0000 UTC m=+0.622921096 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9d2e5599-fe23-41b1-a47a-55e31a585d4f BootID:6349ead0-20de-4c0d-9a78-8877524d5e2e Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 
Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:cc:52:46 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:cc:52:46 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d4:64:33 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f7:87:10 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:92:ad:15 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:91:5b:d1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9a:f5:c0:f3:a5:60 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:3d:47:d8:da:4d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.320058 4858 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.320357 4858 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.321813 4858 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.322103 4858 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.322156 4858 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.322403 4858 topology_manager.go:138] "Creating topology manager with none policy" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.322416 4858 
container_manager_linux.go:303] "Creating device plugin manager" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.323110 4858 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.323150 4858 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.323417 4858 state_mem.go:36] "Initialized new in-memory state store" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.323528 4858 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.326962 4858 kubelet.go:418] "Attempting to sync node with API server" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.326992 4858 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.327010 4858 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.327025 4858 kubelet.go:324] "Adding apiserver pod source" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.327038 4858 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.331274 4858 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.331940 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.332020 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.335456 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.335610 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.337704 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
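The certificate_manager and certificate_store entries around this point report, for each of the two kubelet certificates, the file being loaded, its expiration, and the planned rotation deadline. A small standard-library sketch for inspecting the same expiry by hand is below; the path is taken from the log line above, everything else in the code is illustrative rather than kubelet code.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log; the file holds the current cert and key together.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Walk the PEM blocks and report NotAfter for each certificate found,
	// skipping the private key block that shares the file.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			continue
		}
		fmt.Printf("%s expires %s\n", cert.Subject.CommonName, cert.NotAfter)
	}
}
```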
Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.341897 4858 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343180 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343215 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343226 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343237 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343254 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343264 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343272 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343288 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343303 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343313 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343328 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.343339 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.344588 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.345186 4858 server.go:1280] "Started kubelet" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.345363 4858 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.345656 4858 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.346449 4858 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 18 00:34:07 crc systemd[1]: Started Kubernetes Kubelet. 
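The reflector, CSR, and event errors on either side of this point all fail the same way, "dial tcp 38.102.83.12:6443: connect: connection refused", because the kubelet comes up before the API server is accepting connections. A quick standalone probe of that endpoint is sketched below; the endpoint string is taken from the log, the probe itself is just an illustration (curl or nc against the same host and port would show the same thing).

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the kubelet is failing to reach in the surrounding entries.
	const endpoint = "api-int.crc.testing:6443"

	conn, err := net.DialTimeout("tcp", endpoint, 3*time.Second)
	if err != nil {
		// During early startup this prints the same "connection refused"
		// that the reflectors and the CSR request are reporting.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver TCP port is accepting connections")
}
```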
Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.347386 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.350034 4858 server.go:460] "Adding debug handlers to kubelet server" Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.351653 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18953009dc11af2e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:34:07.345143598 +0000 UTC m=+0.650980340,LastTimestamp:2026-02-18 00:34:07.345143598 +0000 UTC m=+0.650980340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.357744 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.357853 4858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.357935 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:28:03.716708656 +0000 UTC Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.357988 4858 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.358027 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.358046 4858 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.358020 4858 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.358817 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.358915 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.359200 4858 factory.go:55] Registering systemd factory Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.359250 4858 factory.go:221] Registration of the systemd container factory successfully Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.359249 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="200ms" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.359631 4858 factory.go:153] Registering CRI-O factory Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.359730 4858 factory.go:221] Registration of the crio container factory successfully Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.359882 4858 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.359993 4858 factory.go:103] Registering Raw factory Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.360085 4858 manager.go:1196] Started watching for new ooms in manager Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.360879 4858 manager.go:319] Starting recovery of all containers Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.375593 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.375863 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.375963 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376060 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376173 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376264 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376387 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376487 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376602 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376716 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376819 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376905 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.376983 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.377065 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.377157 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.377232 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.377303 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.377374 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.377445 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.377557 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378171 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378273 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378308 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378334 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378360 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378387 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378420 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378451 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378477 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378557 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378589 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378617 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378642 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378688 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378717 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378743 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378769 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378797 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378825 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378850 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378874 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378902 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378927 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378951 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.378977 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379001 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379028 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379056 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379082 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379107 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379131 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379156 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379192 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379226 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379331 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379372 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379400 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379427 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379454 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379481 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379543 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379575 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379604 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379629 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379700 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379732 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379760 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379789 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.379818 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.381710 4858 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.381779 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.381815 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.381879 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.381907 4858 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.381936 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.381961 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.381988 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382014 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382042 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382071 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382098 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382126 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382155 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382183 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382236 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382269 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382295 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382320 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382345 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382369 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382394 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382420 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382445 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382471 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382547 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382579 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382608 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382632 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382658 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382684 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382708 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382742 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382767 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382792 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382819 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382857 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382887 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382912 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382941 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382967 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.382998 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383027 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383057 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383085 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383114 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383142 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383168 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383198 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383224 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383248 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383276 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383303 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383329 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383355 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383382 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383405 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383428 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383454 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383477 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383540 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383568 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383594 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383619 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383647 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383674 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383698 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383724 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383751 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383775 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383801 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383824 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383849 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383873 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383900 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383926 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383952 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.383977 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384001 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384026 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384054 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384079 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384102 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384128 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384153 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384178 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384203 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384226 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384251 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384276 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384300 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384325 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384349 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384377 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384402 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384428 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384456 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384483 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384549 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384576 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384601 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384626 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384653 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384771 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384802 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384832 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384859 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384889 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384914 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384942 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384969 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.384996 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385028 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385054 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385080 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385169 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385200 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385228 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385253 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385279 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385304 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385332 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385360 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385387 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385413 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385441 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385471 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385529 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385558 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385581 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385606 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385630 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385658 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385685 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385710 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385737 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385766 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385792 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385817 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385870 4858 reconstruct.go:97] "Volume reconstruction finished" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.385889 4858 reconciler.go:26] "Reconciler: start to sync state" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.395905 4858 manager.go:324] Recovery completed Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.412931 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.413529 4858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.415704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.416576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.416604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.418073 4858 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.418112 4858 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.418149 4858 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.418198 4858 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.419421 4858 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.419455 4858 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.419491 4858 state_mem.go:36] "Initialized new in-memory state store" Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.419989 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.420071 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.432857 4858 policy_none.go:49] "None policy: Start" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.434072 4858 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.434114 4858 state_mem.go:35] "Initializing new in-memory state store" Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.458538 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.497751 4858 manager.go:334] "Starting Device Plugin manager" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.497877 4858 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.497940 4858 server.go:79] "Starting device plugin registration server" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.498818 4858 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.498879 4858 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.499635 4858 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.499764 4858 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.499802 4858 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.505302 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:34:07 crc kubenswrapper[4858]: 
I0218 00:34:07.518572 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.518661 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.520411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.520440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.520452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.520612 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.520959 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.521011 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.521610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.521646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.521684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.521988 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.522202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.522245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.522265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.522337 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.522452 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.523906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.523895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.523930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.524050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.523989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.524083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.524299 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.524502 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.524538 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.525787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.525841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.525860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.526058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.526118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.526148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.526472 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.526691 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.526732 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.527794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.527830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.527797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.527892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.527904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.527845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.528186 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.528220 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.529102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.529126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.529229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.561334 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="400ms" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.588883 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.588922 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.588943 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.588959 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.588979 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.588996 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.589011 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.589028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.589043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.589061 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.589076 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.589132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 
00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.589203 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.589243 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.589273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.599646 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.601063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.601097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.601109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.601158 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.601647 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.690771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.690873 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.690926 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.690972 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691003 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.690999 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691010 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691139 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691139 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691066 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 
00:34:07.691227 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691233 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691045 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691317 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691294 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691303 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691330 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691402 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691426 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691472 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691475 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691543 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691256 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691533 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.691603 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.801782 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.803185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.803228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.803244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.803276 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.803760 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": 
dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.868696 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.875423 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.903301 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.921056 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: I0218 00:34:07.924188 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.924759 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-aaa238fd3e786c7ca14b718e86545ed0a06eaa5010e26e0548ae9798b251518f WatchSource:0}: Error finding container aaa238fd3e786c7ca14b718e86545ed0a06eaa5010e26e0548ae9798b251518f: Status 404 returned error can't find the container with id aaa238fd3e786c7ca14b718e86545ed0a06eaa5010e26e0548ae9798b251518f Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.925391 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-59a838d1b4ff8d323a3fa62ad857f21fcb776a28a7f1f90d1551147c1ee80771 WatchSource:0}: Error finding container 59a838d1b4ff8d323a3fa62ad857f21fcb776a28a7f1f90d1551147c1ee80771: Status 404 returned error can't find the container with id 59a838d1b4ff8d323a3fa62ad857f21fcb776a28a7f1f90d1551147c1ee80771 Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.937331 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cc2296c36136cbca074cf582f610e55569511bccaa7e7aa72b70d0b35b955156 WatchSource:0}: Error finding container cc2296c36136cbca074cf582f610e55569511bccaa7e7aa72b70d0b35b955156: Status 404 returned error can't find the container with id cc2296c36136cbca074cf582f610e55569511bccaa7e7aa72b70d0b35b955156 Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.939778 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-5547e846bc7ba709ab5d843cd1c5fd85ec7b5a9d62e115bcd02709eae3a52812 WatchSource:0}: Error finding container 5547e846bc7ba709ab5d843cd1c5fd85ec7b5a9d62e115bcd02709eae3a52812: Status 404 returned error can't find the container with id 5547e846bc7ba709ab5d843cd1c5fd85ec7b5a9d62e115bcd02709eae3a52812 Feb 18 00:34:07 crc kubenswrapper[4858]: W0218 00:34:07.947211 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-645f4cb5c8048c7c9433a1f18a70d7f4022b4f528fb7be070e640d93adb9831a WatchSource:0}: Error finding container 
645f4cb5c8048c7c9433a1f18a70d7f4022b4f528fb7be070e640d93adb9831a: Status 404 returned error can't find the container with id 645f4cb5c8048c7c9433a1f18a70d7f4022b4f528fb7be070e640d93adb9831a Feb 18 00:34:07 crc kubenswrapper[4858]: E0218 00:34:07.963063 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="800ms" Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.204646 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.206669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.206756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.206773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.206818 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:34:08 crc kubenswrapper[4858]: E0218 00:34:08.207389 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Feb 18 00:34:08 crc kubenswrapper[4858]: W0218 00:34:08.233197 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:08 crc kubenswrapper[4858]: E0218 00:34:08.233299 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.348232 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.358327 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:44:13.674492307 +0000 UTC Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.428080 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"645f4cb5c8048c7c9433a1f18a70d7f4022b4f528fb7be070e640d93adb9831a"} Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.429236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc2296c36136cbca074cf582f610e55569511bccaa7e7aa72b70d0b35b955156"} Feb 18 00:34:08 
crc kubenswrapper[4858]: I0218 00:34:08.430322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5547e846bc7ba709ab5d843cd1c5fd85ec7b5a9d62e115bcd02709eae3a52812"} Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.431721 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"59a838d1b4ff8d323a3fa62ad857f21fcb776a28a7f1f90d1551147c1ee80771"} Feb 18 00:34:08 crc kubenswrapper[4858]: I0218 00:34:08.432879 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"aaa238fd3e786c7ca14b718e86545ed0a06eaa5010e26e0548ae9798b251518f"} Feb 18 00:34:08 crc kubenswrapper[4858]: W0218 00:34:08.642070 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:08 crc kubenswrapper[4858]: E0218 00:34:08.642203 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:08 crc kubenswrapper[4858]: E0218 00:34:08.765083 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="1.6s" Feb 18 00:34:08 crc kubenswrapper[4858]: W0218 00:34:08.855342 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:08 crc kubenswrapper[4858]: E0218 00:34:08.855447 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:08 crc kubenswrapper[4858]: W0218 00:34:08.946955 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:08 crc kubenswrapper[4858]: E0218 00:34:08.947072 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.007596 4858 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.009682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.009765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.009788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.009831 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:34:09 crc kubenswrapper[4858]: E0218 00:34:09.010410 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.313610 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 00:34:09 crc kubenswrapper[4858]: E0218 00:34:09.314471 4858 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.349064 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.359158 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:36:16.165323797 +0000 UTC Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.438175 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a" exitCode=0 Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.438337 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a"} Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.438353 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.439668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.439717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.439733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.441155 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="e365117a2b1431afc9a490c3a9952cef957ceecf8b174dc7e6ea9c8cb2189d0f" exitCode=0 Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.441291 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e365117a2b1431afc9a490c3a9952cef957ceecf8b174dc7e6ea9c8cb2189d0f"} Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.441424 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.442093 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.442640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.442703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.442727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.443227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.443282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.443298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.443708 4858 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520" exitCode=0 Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.443761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520"} Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.443822 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.445642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.445681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.445698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.446973 4858 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2" exitCode=0 Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.447058 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2"} Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.447077 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.451091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.451141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.451163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.454728 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0"} Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.454785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c"} Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.454812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0"} Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.454835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee"} Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.454940 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.456243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.456280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.456299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:09 crc kubenswrapper[4858]: I0218 00:34:09.702391 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.348224 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.359260 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-27 14:13:49.554808604 +0000 UTC Feb 18 00:34:10 crc kubenswrapper[4858]: E0218 00:34:10.365883 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="3.2s" Feb 18 00:34:10 crc kubenswrapper[4858]: W0218 00:34:10.456257 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:10 crc kubenswrapper[4858]: E0218 00:34:10.456343 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.460690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166"} Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.460794 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0"} Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.460822 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9"} Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.460717 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.462011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.462057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.462104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.463758 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88"} Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.463806 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716"} Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.463824 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1"} Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.463839 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c"} Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.465821 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="477bef0d404b76f06fbb8519a0e76034ca21b41d27ad448ea8433c2778db64cf" exitCode=0 Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.465923 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.465932 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"477bef0d404b76f06fbb8519a0e76034ca21b41d27ad448ea8433c2778db64cf"} Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.466977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.467018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.467031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.475299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"aa8023934d83dff5d479338ea0a0f8a97f50b10c8b8ac1fafe3a6f852e442981"} Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.475334 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.475367 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.476544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.476608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.476627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.476649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.476676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.476695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:10 crc kubenswrapper[4858]: W0218 00:34:10.564586 4858 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused Feb 18 00:34:10 crc kubenswrapper[4858]: E0218 00:34:10.564677 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.610701 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.611678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.611725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.611739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:10 crc kubenswrapper[4858]: I0218 00:34:10.611767 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:34:10 crc kubenswrapper[4858]: E0218 00:34:10.612259 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.189760 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.359953 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:17:11.379467571 +0000 UTC Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.485064 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d6288589b9b7ba4774bd10d024e5acc8b15075ab657008e32ca6ff1bffeae251" exitCode=0 Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.485181 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d6288589b9b7ba4774bd10d024e5acc8b15075ab657008e32ca6ff1bffeae251"} Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.485257 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.486428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.486479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.486521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.492982 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167"} Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.493024 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.493064 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.493141 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.493604 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.494649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.494865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.494933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.495111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.495142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.494965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.495193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.495222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.495067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.495073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.495315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.495336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:11 crc kubenswrapper[4858]: I0218 00:34:11.851265 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.002608 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.360100 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:05:25.596648413 +0000 UTC Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 
00:34:12.360147 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.500812 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.500826 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.500812 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.500964 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.500797 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2c86afbc93ec115bbde2bccaeb88dc607db9a2dbe90a84853b8005056cf473b2"} Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.501158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fbb803076875076c112b456801a8cb1112a68024db92bb2a2ae382df59fd07c4"} Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.501194 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"744627ff147a90f2aedb6e898e20e7eb34af5220349fd5217e60ed6691668bc6"} Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.502073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.502106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.502119 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.503825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.503861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.503858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.503877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.503949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.503976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:12 crc kubenswrapper[4858]: I0218 00:34:12.638988 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.360607 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:21:00.517843007 +0000 UTC Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.514720 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7be1abb38febe9155510d9dde84e03338ab66c5fe1d335fbfd66e63605fd77ae"} Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.514807 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7329f29c9762dec685d3d67dc4a5717c2900ca08c6cf726f322bb5a0aa595a5d"} Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.514832 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.514902 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.514955 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.516595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.516648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.516673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.517063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.517105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.517115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.517287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.517337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.517362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.584808 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.812597 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.814155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.814228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.814247 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.814286 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:34:13 crc kubenswrapper[4858]: I0218 00:34:13.906152 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.173443 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.184490 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.361679 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:10:34.338771294 +0000 UTC Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.517884 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.517964 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.518117 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.519614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.519676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.519622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.519722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.519685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.519740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.519796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.519830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.519699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.852288 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 00:34:14 crc kubenswrapper[4858]: I0218 00:34:14.852384 4858 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 00:34:15 crc kubenswrapper[4858]: I0218 00:34:15.362135 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:54:28.472600012 +0000 UTC Feb 18 00:34:15 crc kubenswrapper[4858]: I0218 00:34:15.519998 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:15 crc kubenswrapper[4858]: I0218 00:34:15.520013 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:15 crc kubenswrapper[4858]: I0218 00:34:15.521896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:15 crc kubenswrapper[4858]: I0218 00:34:15.521971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:15 crc kubenswrapper[4858]: I0218 00:34:15.521995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:15 crc kubenswrapper[4858]: I0218 00:34:15.521906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:15 crc kubenswrapper[4858]: I0218 00:34:15.522058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:15 crc kubenswrapper[4858]: I0218 00:34:15.522089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:16 crc kubenswrapper[4858]: I0218 00:34:16.340853 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 18 00:34:16 crc kubenswrapper[4858]: I0218 00:34:16.341187 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:16 crc kubenswrapper[4858]: I0218 00:34:16.343272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:16 crc kubenswrapper[4858]: I0218 00:34:16.343356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:16 crc kubenswrapper[4858]: I0218 00:34:16.343381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:16 crc kubenswrapper[4858]: I0218 00:34:16.362295 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:48:22.063768241 +0000 UTC Feb 18 00:34:17 crc kubenswrapper[4858]: I0218 00:34:17.362671 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:05:39.713546134 +0000 UTC Feb 18 00:34:17 crc kubenswrapper[4858]: E0218 00:34:17.505454 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:34:18 crc 
kubenswrapper[4858]: I0218 00:34:18.363582 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:01:14.465994987 +0000 UTC Feb 18 00:34:19 crc kubenswrapper[4858]: I0218 00:34:19.363899 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:32:45.736165528 +0000 UTC Feb 18 00:34:19 crc kubenswrapper[4858]: I0218 00:34:19.713131 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:19 crc kubenswrapper[4858]: I0218 00:34:19.713289 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:19 crc kubenswrapper[4858]: I0218 00:34:19.714724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:19 crc kubenswrapper[4858]: I0218 00:34:19.714798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:19 crc kubenswrapper[4858]: I0218 00:34:19.714819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:20 crc kubenswrapper[4858]: I0218 00:34:20.364922 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:33:22.961894197 +0000 UTC Feb 18 00:34:21 crc kubenswrapper[4858]: I0218 00:34:21.349748 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 18 00:34:21 crc kubenswrapper[4858]: I0218 00:34:21.366063 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:37:20.066776552 +0000 UTC Feb 18 00:34:21 crc kubenswrapper[4858]: W0218 00:34:21.505273 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 18 00:34:21 crc kubenswrapper[4858]: I0218 00:34:21.505390 4858 trace.go:236] Trace[907392305]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:34:11.503) (total time: 10001ms): Feb 18 00:34:21 crc kubenswrapper[4858]: Trace[907392305]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:34:21.505) Feb 18 00:34:21 crc kubenswrapper[4858]: Trace[907392305]: [10.001836267s] [10.001836267s] END Feb 18 00:34:21 crc kubenswrapper[4858]: E0218 00:34:21.505420 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 00:34:21 crc kubenswrapper[4858]: I0218 00:34:21.570177 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup 
probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 00:34:21 crc kubenswrapper[4858]: I0218 00:34:21.570246 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 00:34:21 crc kubenswrapper[4858]: I0218 00:34:21.585663 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 00:34:21 crc kubenswrapper[4858]: I0218 00:34:21.585737 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 00:34:22 crc kubenswrapper[4858]: I0218 00:34:22.366487 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:13:58.784538068 +0000 UTC Feb 18 00:34:22 crc kubenswrapper[4858]: I0218 00:34:22.797275 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 00:34:22 crc kubenswrapper[4858]: I0218 00:34:22.797451 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:22 crc kubenswrapper[4858]: I0218 00:34:22.798512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:22 crc kubenswrapper[4858]: I0218 00:34:22.798548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:22 crc kubenswrapper[4858]: I0218 00:34:22.798558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:22 crc kubenswrapper[4858]: I0218 00:34:22.847925 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.367036 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:39:37.210442882 +0000 UTC Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.542074 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.543589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.543673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.543694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 
00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.561132 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.914042 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.914249 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.915692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.915787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.915816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:23 crc kubenswrapper[4858]: I0218 00:34:23.920720 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.368214 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:56:21.978506186 +0000 UTC Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.544272 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.544319 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.544348 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.546164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.546211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.546231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.546256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.546299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.546317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.733676 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.852359 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Feb 18 00:34:24 crc kubenswrapper[4858]: I0218 00:34:24.852453 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 00:34:25 crc kubenswrapper[4858]: I0218 00:34:25.368818 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:04:18.201119903 +0000 UTC Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.368992 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:54:43.96088448 +0000 UTC Feb 18 00:34:26 crc kubenswrapper[4858]: E0218 00:34:26.563425 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.571174 4858 trace.go:236] Trace[215050229]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:34:16.406) (total time: 10164ms): Feb 18 00:34:26 crc kubenswrapper[4858]: Trace[215050229]: ---"Objects listed" error: 10164ms (00:34:26.571) Feb 18 00:34:26 crc kubenswrapper[4858]: Trace[215050229]: [10.16451253s] [10.16451253s] END Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.571764 4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.571229 4858 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.572894 4858 trace.go:236] Trace[36078203]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:34:13.963) (total time: 12609ms): Feb 18 00:34:26 crc kubenswrapper[4858]: Trace[36078203]: ---"Objects listed" error: 12609ms (00:34:26.572) Feb 18 00:34:26 crc kubenswrapper[4858]: Trace[36078203]: [12.60952934s] [12.60952934s] END Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.572943 4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.573140 4858 trace.go:236] Trace[1185333816]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:34:11.739) (total time: 14833ms): Feb 18 00:34:26 crc kubenswrapper[4858]: Trace[1185333816]: ---"Objects listed" error: 14833ms (00:34:26.573) Feb 18 00:34:26 crc kubenswrapper[4858]: Trace[1185333816]: [14.833496151s] [14.833496151s] END Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.573168 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.575844 4858 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.581850 4858 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.582200 4858 
kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.583941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.583998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.584043 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.584072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.584090 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:26Z","lastTransitionTime":"2026-02-18T00:34:26Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:26 crc kubenswrapper[4858]: E0218 00:34:26.605349 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.611363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.611423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 
00:34:26.611448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.611486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.611546 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:26Z","lastTransitionTime":"2026-02-18T00:34:26Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.619596 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34076->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.619699 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34076->192.168.126.11:17697: read: connection reset by peer" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.620155 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.620210 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 18 00:34:26 crc kubenswrapper[4858]: E0218 00:34:26.633367 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a7
13bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524
d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.639833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.639885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.639901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.639925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.639941 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:26Z","lastTransitionTime":"2026-02-18T00:34:26Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:26 crc kubenswrapper[4858]: E0218 00:34:26.654460 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.658729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.658772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 
00:34:26.658787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.658816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.658831 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:26Z","lastTransitionTime":"2026-02-18T00:34:26Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:26 crc kubenswrapper[4858]: E0218 00:34:26.673804 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.678688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.678762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 
00:34:26.678781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.678815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.678834 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:26Z","lastTransitionTime":"2026-02-18T00:34:26Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:26 crc kubenswrapper[4858]: E0218 00:34:26.690274 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:26 crc kubenswrapper[4858]: E0218 00:34:26.690596 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.692524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 
00:34:26.692573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.692584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.692605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.692620 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:26Z","lastTransitionTime":"2026-02-18T00:34:26Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.795733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.795786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.795799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.795821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.795835 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:26Z","lastTransitionTime":"2026-02-18T00:34:26Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.898974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.899057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.899083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.899121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:26 crc kubenswrapper[4858]: I0218 00:34:26.899147 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:26Z","lastTransitionTime":"2026-02-18T00:34:26Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.001550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.001620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.001646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.001689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.001715 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.104481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.104617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.104643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.104802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.104881 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.207254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.207283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.207292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.207321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.207330 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.309450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.309545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.309565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.309602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.309624 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.339275 4858 apiserver.go:52] "Watching apiserver" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.345303 4858 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.345758 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.346328 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.346475 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.346639 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.346473 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.347005 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.347048 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.347109 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.347022 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.347189 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.350978 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.351089 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.351335 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.351461 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.351631 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.353151 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.353228 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.354829 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.359580 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.360112 4858 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.369380 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:56:21.4351413 +0000 UTC Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.375902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.375963 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376000 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376031 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376064 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376094 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376125 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376162 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376202 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376246 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376304 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376351 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376395 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376438 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376571 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376619 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376670 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376716 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376759 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376873 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376894 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376914 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376922 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.376967 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377092 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377129 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377160 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377190 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377219 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377251 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377286 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377319 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377352 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377384 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377456 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377487 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377560 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377593 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377626 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377656 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377689 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377724 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377758 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377791 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377825 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377858 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377890 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377922 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377957 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377993 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378026 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378058 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378089 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378121 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378201 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378238 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378270 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378303 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378335 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378400 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378432 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378549 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378590 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378623 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378690 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378721 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378753 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378800 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378866 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378896 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378935 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378968 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379042 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379079 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379112 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379146 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379179 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379242 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379276 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379308 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379339 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379370 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379420 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379452 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379485 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377281 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377315 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377406 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377439 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377627 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377666 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377770 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377924 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.377989 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.378432 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379267 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379740 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379786 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.379998 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380218 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380532 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380584 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380757 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380789 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380824 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380876 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380897 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.380975 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381031 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381125 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381159 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381050 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381191 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381225 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381258 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381290 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381321 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381359 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381452 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381542 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381579 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381613 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381646 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381680 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381711 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381744 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381778 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381812 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381878 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381915 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381980 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382042 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382105 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382144 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382177 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382209 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382244 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382278 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382312 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382349 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 00:34:27 crc 
kubenswrapper[4858]: I0218 00:34:27.382381 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382413 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382475 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382589 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382685 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382728 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382778 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382822 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382869 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382915 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383123 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383653 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383870 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384092 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384149 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384209 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384263 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384324 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384375 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384433 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384735 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384816 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384911 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384973 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385201 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385311 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385365 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385433 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385688 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385765 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385824 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385881 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385931 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385985 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386042 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386098 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386152 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386206 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386316 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386370 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386423 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 00:34:27 crc 
kubenswrapper[4858]: I0218 00:34:27.386478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386569 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386624 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386675 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386731 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386782 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386835 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387226 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" 
(UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387289 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387397 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387686 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387996 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388123 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388275 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388335 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388719 4858 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388764 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388796 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388825 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388885 4858 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388918 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388948 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388971 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388992 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389012 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389032 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389052 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389074 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389106 4858 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389137 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389167 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389196 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389216 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389237 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389259 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389278 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389297 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389319 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389339 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389360 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389380 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 
00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389400 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389419 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.389442 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.397308 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381226 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381367 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381547 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.381718 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382098 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.382865 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383044 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383464 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.401719 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.383716 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384214 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384781 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384875 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.384791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385184 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385423 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385746 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.385583 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386189 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386310 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.386475 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.387763 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388206 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.388824 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.390826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.391068 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.391158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.391427 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.391523 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.391690 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.392707 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.392774 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.393090 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.393336 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.393549 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.393773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.393879 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.394350 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.393987 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.394676 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.394723 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.394828 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.395013 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.395047 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.395361 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.395225 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.396089 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.396130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.396360 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.396704 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.396746 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.396787 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.397673 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.397677 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.397828 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.397975 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.398960 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.399290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.399323 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.399378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.399982 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.400026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.400056 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.400181 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.400780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.400935 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.400981 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.400992 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.401160 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.401288 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.401320 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.401034 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.401777 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.402160 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.402271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.402578 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.402379 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.403090 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.403093 4858 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.403326 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.403370 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.403672 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.403798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.403824 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.403816 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.404210 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.404423 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.404737 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.405657 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.405718 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.405833 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.405882 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.405053 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.406329 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.406335 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.406479 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.407015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.407096 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.407328 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.407685 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.407873 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.407890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.408093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.408267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.408546 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.408550 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.408732 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:34:27.908701407 +0000 UTC m=+21.214538169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.409246 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.409595 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.409696 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.409827 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.409882 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.410008 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.410084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.409991 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.410413 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.410647 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.410897 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.412155 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.412171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.412332 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.412582 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.412754 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.413316 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.413407 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.413663 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.413715 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.413758 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.413840 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.413915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.414015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.414352 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.414864 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.414920 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.415524 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.415605 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.416345 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.417467 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.417906 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.418051 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.412432 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:27.912413486 +0000 UTC m=+21.218250248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.418968 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.419084 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:27.918997303 +0000 UTC m=+21.224834115 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.419486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.419588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.419660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.419692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.419760 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.420027 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.422275 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.425572 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.425768 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.426337 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.428081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.428339 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.428573 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.428695 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.428880 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:27.928830897 +0000 UTC m=+21.234667649 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.430731 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.432729 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.432760 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.432774 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.432825 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:27.932806872 +0000 UTC m=+21.238643614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.433035 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.432839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.435603 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.435708 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.435783 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.436003 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.436095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.436142 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.436353 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.436440 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.436511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.436702 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.436963 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.437212 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.437568 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.438107 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.438214 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.438670 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.438824 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.438924 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.439750 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.439723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.439942 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.440200 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.441923 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.444474 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.447869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.449368 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.452091 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.455566 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.459161 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.461336 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.464350 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.464435 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.466168 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.468976 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.469553 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.469774 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.471007 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.471026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.473729 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.475719 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.477093 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.479161 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.479677 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.479996 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.481524 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.482093 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.482945 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.484384 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.485050 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.485511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.486309 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.486950 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.487866 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.489128 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490038 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490190 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490315 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490642 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490690 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490714 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490735 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490756 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490775 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490796 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490815 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490833 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490852 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490872 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490911 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490937 4858 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490956 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490975 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.490992 4858 
reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491009 4858 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491026 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491042 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491058 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491072 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491087 4858 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491101 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491115 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491129 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491142 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491156 4858 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491169 4858 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491183 4858 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491196 4858 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491210 4858 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491223 4858 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491237 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491251 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491267 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491282 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491298 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491312 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491327 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491342 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491356 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491375 4858 reconciler_common.go:293] "Volume detached for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491392 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491407 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491422 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491436 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491450 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491315 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491464 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491547 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491563 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491577 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491591 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491606 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491619 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491633 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491646 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491659 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 
00:34:27.491673 4858 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491690 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491704 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491718 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491732 4858 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491746 4858 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491761 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491775 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491789 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491803 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491817 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491831 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491845 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491858 4858 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491874 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491888 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491903 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491917 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491932 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491948 4858 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491963 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491977 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.491994 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492009 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492024 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492041 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492057 4858 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492072 4858 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492096 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492114 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492129 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492143 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492157 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492171 4858 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492186 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492201 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492215 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492229 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492244 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492261 4858 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492276 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492291 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492307 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492322 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492336 4858 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492351 4858 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492368 4858 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492385 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492402 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492417 4858 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492432 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492447 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492462 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492478 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492520 4858 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492538 4858 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492553 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492569 4858 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492584 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492601 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492616 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492631 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492690 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492708 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492724 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492740 4858 reconciler_common.go:293] "Volume detached for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492755 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492770 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492786 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492802 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492818 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492832 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492847 4858 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492861 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492877 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492917 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492934 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492949 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492965 4858 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492981 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492984 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.492996 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493011 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493027 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493043 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493058 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493073 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493087 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493103 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493118 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493139 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493154 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493171 4858 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493187 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493202 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493217 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493232 4858 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493248 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493262 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493279 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493293 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493311 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493327 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493342 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493362 4858 reconciler_common.go:293] "Volume detached for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493378 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493393 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493408 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493558 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.493774 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.495314 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.496052 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.497215 4858 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.497352 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.499565 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.500940 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.501652 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.501736 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.504544 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.505960 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.508016 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.509641 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.511624 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.512292 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.512731 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.513815 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.515625 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.516942 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.518047 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.518805 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.519985 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.521192 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.521963 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.523119 4858 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.523784 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.523859 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.525872 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.526203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.526229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.526240 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.526281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.526295 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.526757 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.527489 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.539684 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.550027 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.553895 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.555609 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167" exitCode=255 Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.555663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.560455 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.566486 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.567192 4858 scope.go:117] "RemoveContainer" containerID="29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.571774 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.583692 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.595850 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.611657 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.626800 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.628935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.629014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.629033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.629058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.629076 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.641838 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.655523 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.666637 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.673824 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:34:27 crc kubenswrapper[4858]: W0218 00:34:27.689152 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c81adbc19d3db1da750d2d18bc873ec1b07d28f163a237b45fc3452821c8793f WatchSource:0}: Error finding container c81adbc19d3db1da750d2d18bc873ec1b07d28f163a237b45fc3452821c8793f: Status 404 returned error can't find the container with id c81adbc19d3db1da750d2d18bc873ec1b07d28f163a237b45fc3452821c8793f Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.696658 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.709874 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:34:27 crc kubenswrapper[4858]: W0218 00:34:27.725185 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-f68b4f0f99ecefa70fd5f691ecbaa1128577a73a9975b0c4386649373b5e0f7d WatchSource:0}: Error finding container f68b4f0f99ecefa70fd5f691ecbaa1128577a73a9975b0c4386649373b5e0f7d: Status 404 returned error can't find the container with id f68b4f0f99ecefa70fd5f691ecbaa1128577a73a9975b0c4386649373b5e0f7d Feb 18 00:34:27 crc kubenswrapper[4858]: W0218 00:34:27.730128 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-9e6a1e9b27a452e99fb816ee3b0eb12c37b87f6c236a7be30d403531e03f2aee WatchSource:0}: Error finding container 9e6a1e9b27a452e99fb816ee3b0eb12c37b87f6c236a7be30d403531e03f2aee: Status 404 returned error can't find the container with id 9e6a1e9b27a452e99fb816ee3b0eb12c37b87f6c236a7be30d403531e03f2aee Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.730360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.730386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.730395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.730409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.730419 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.833734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.834072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.834086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.834103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.834115 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.936641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.936679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.936688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.936702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.936711 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:27Z","lastTransitionTime":"2026-02-18T00:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.999641 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.999719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.999743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.999776 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:34:28.999747054 +0000 UTC m=+22.305583796 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.999822 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:27 crc kubenswrapper[4858]: E0218 00:34:27.999834 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:27 crc kubenswrapper[4858]: I0218 00:34:27.999870 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:27.999887 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:28.999870518 +0000 UTC m=+22.305707360 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:27.999942 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:27.999975 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:27.999978 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:27.999993 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:28.000020 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:29.000011472 +0000 UTC m=+22.305848224 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:28.000048 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:29.000031752 +0000 UTC m=+22.305868574 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:28.000098 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:28.000136 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:28.000150 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:28 crc kubenswrapper[4858]: E0218 00:34:28.000208 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:29.000188976 +0000 UTC m=+22.306025738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.038630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.038671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.038680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.038698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.038708 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.141003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.141036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.141044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.141058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.141067 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.244211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.244259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.244270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.244284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.244294 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.346755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.346795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.346808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.346825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.346835 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.370359 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:35:54.934965817 +0000 UTC Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.449455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.449542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.449565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.449596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.449615 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.512416 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.552799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.552870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.552895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.552924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.552950 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.564843 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.564899 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c81adbc19d3db1da750d2d18bc873ec1b07d28f163a237b45fc3452821c8793f"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.568205 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.571069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.571340 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.575977 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.576028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.576048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9e6a1e9b27a452e99fb816ee3b0eb12c37b87f6c236a7be30d403531e03f2aee"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.581445 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f68b4f0f99ecefa70fd5f691ecbaa1128577a73a9975b0c4386649373b5e0f7d"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.601913 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-che
ck-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.623223 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.638528 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.655467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.655532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.655541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.655556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.655565 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.661970 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.676144 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.693212 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.707115 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.720817 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.734400 4858 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.745568 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.758254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.758306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.758317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.758333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.758347 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.761632 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.774538 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.793585 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.812335 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.861342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.861411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.861429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.861469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.861487 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.964302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.964343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.964352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.964365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:28 crc kubenswrapper[4858]: I0218 00:34:28.964375 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:28Z","lastTransitionTime":"2026-02-18T00:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.008212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.008339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.008387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008416 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:34:31.008384511 +0000 UTC m=+24.314221283 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.008462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.008537 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008545 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008589 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008625 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:31.008609706 +0000 UTC m=+24.314446478 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008630 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008649 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008659 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008680 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008716 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:31.008692518 +0000 UTC m=+24.314529260 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008769 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:31.00874691 +0000 UTC m=+24.314583732 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008690 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008811 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.008861 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:31.008846102 +0000 UTC m=+24.314682954 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.067325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.067393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.067412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.067439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.067459 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.170650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.170712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.170729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.170754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.170777 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.274273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.274374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.274395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.274421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.274440 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.371521 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:26:57.763398651 +0000 UTC Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.376627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.376696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.376716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.376742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.376760 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.419296 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.419379 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.419467 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.419436 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.419598 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:29 crc kubenswrapper[4858]: E0218 00:34:29.419710 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.479402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.479466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.479527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.479554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.479571 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.582399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.582453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.582470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.582517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.582535 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.685043 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.685082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.685092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.685110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.685119 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.787924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.787964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.787972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.787985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.787994 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.890416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.890473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.890584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.890610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.890628 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.992919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.993001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.993027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.993056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:29 crc kubenswrapper[4858]: I0218 00:34:29.993079 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:29Z","lastTransitionTime":"2026-02-18T00:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.095964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.096019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.096038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.096063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.096082 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:30Z","lastTransitionTime":"2026-02-18T00:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.198694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.198737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.198748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.198763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.198775 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:30Z","lastTransitionTime":"2026-02-18T00:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.302275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.302356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.302381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.302404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.302421 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:30Z","lastTransitionTime":"2026-02-18T00:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.371690 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:11:28.021085387 +0000 UTC Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.405703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.405764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.405778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.405798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.405811 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:30Z","lastTransitionTime":"2026-02-18T00:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.508269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.508336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.508348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.508392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.508404 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:30Z","lastTransitionTime":"2026-02-18T00:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.611434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.611548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.611565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.611582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.611637 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:30Z","lastTransitionTime":"2026-02-18T00:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.714737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.714777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.714788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.714803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.714813 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:30Z","lastTransitionTime":"2026-02-18T00:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.818366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.818401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.818412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.818427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.818439 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:30Z","lastTransitionTime":"2026-02-18T00:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.920875 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.920953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.920975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.920998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:30 crc kubenswrapper[4858]: I0218 00:34:30.921016 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:30Z","lastTransitionTime":"2026-02-18T00:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.024171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.024264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.024284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.024305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.024322 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.026601 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.026687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.026737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.026776 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.026819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.026859 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:34:35.02682142 +0000 UTC m=+28.332658192 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.026932 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.026977 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.026977 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.026998 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.026983 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.027066 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.027081 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.027054 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:35.027033716 +0000 UTC m=+28.332870488 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.027154 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:35.027132809 +0000 UTC m=+28.332969581 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.027187 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:35.02717492 +0000 UTC m=+28.333011692 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.027079 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.027297 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:35.027263012 +0000 UTC m=+28.333099774 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.127034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.127106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.127123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.127148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.127167 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.230702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.230777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.230802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.230831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.230857 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.334167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.334250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.334274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.334304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.334327 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.372251 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:21:50.947631707 +0000 UTC Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.419026 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.419095 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.419199 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.419260 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.419432 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:31 crc kubenswrapper[4858]: E0218 00:34:31.419660 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.437598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.437657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.437689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.437714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.437731 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.540845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.540910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.540920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.540936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.540948 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.591816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.615253 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.635563 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.643760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.643806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.643816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.643832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.643845 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.656787 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.674315 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.695431 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.712994 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.731900 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.745674 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.745724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.745735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.745751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.745762 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.848525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.848577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.848591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.848609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.848627 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.857789 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.864040 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.870333 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.884365 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.908597 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.930680 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.944269 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.951055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.951090 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.951121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.951138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.951151 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:31Z","lastTransitionTime":"2026-02-18T00:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.963714 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.977846 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:31 crc kubenswrapper[4858]: I0218 00:34:31.991219 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.006780 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.023667 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.037919 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.051419 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.053487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.053556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.053570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.053590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.053605 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.071258 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.092425 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.107769 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.127537 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.157177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.157230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.157248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.157284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.157301 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.260330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.260406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.260429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.260457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.260529 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.362707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.362937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.363018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.363110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.363205 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.373303 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:04:38.177870145 +0000 UTC Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.466852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.466918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.466947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.466981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.467004 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.576436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.576489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.576527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.576548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.576565 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.580045 4858 csr.go:261] certificate signing request csr-rp4mp is approved, waiting to be issued Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.600673 4858 csr.go:257] certificate signing request csr-rp4mp is issued Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.678642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.678683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.678695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.678713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.678726 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.780950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.780981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.780989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.781001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.781011 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.883860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.883910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.883924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.883942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.883955 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.986120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.986163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.986172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.986189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:32 crc kubenswrapper[4858]: I0218 00:34:32.986200 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:32Z","lastTransitionTime":"2026-02-18T00:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.013517 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-jgxjq"] Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.013873 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jgxjq" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.014551 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-n4pmf"] Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.015363 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.015623 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.015762 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.016134 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.018009 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-sr8bs"] Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.018449 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.018708 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.018740 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.018844 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.018858 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.018834 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.020540 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.021441 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.031630 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.045026 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.059154 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.069812 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.083907 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.087949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.087978 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.087990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.088005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.088014 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:33Z","lastTransitionTime":"2026-02-18T00:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.101645 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.119057 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.132294 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.144434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-system-cni-dir\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.144903 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-multus-conf-dir\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.144994 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/631d8e25-82dd-4462-b98d-f076e7264b67-multus-daemon-config\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145077 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-run-multus-certs\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145181 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-os-release\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145369 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-run-k8s-cni-cncf-io\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145441 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-var-lib-cni-bin\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145557 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e24aebe5-ff91-47a8-b642-d7dcc25f9089-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145658 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-system-cni-dir\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e24aebe5-ff91-47a8-b642-d7dcc25f9089-cni-binary-copy\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145815 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-etc-kubernetes\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62vk\" (UniqueName: \"kubernetes.io/projected/e24aebe5-ff91-47a8-b642-d7dcc25f9089-kube-api-access-n62vk\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.145984 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-cnibin\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146074 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/12e0eed0-c83b-4418-9587-7175dec43dfb-hosts-file\") pod \"node-resolver-jgxjq\" (UID: \"12e0eed0-c83b-4418-9587-7175dec43dfb\") " pod="openshift-dns/node-resolver-jgxjq" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146165 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-run-netns\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " 
pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146250 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-hostroot\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146326 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-cnibin\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146404 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-os-release\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146510 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-var-lib-cni-multus\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146603 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-multus-socket-dir-parent\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146670 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt52b\" (UniqueName: \"kubernetes.io/projected/631d8e25-82dd-4462-b98d-f076e7264b67-kube-api-access-bt52b\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcgtl\" (UniqueName: \"kubernetes.io/projected/12e0eed0-c83b-4418-9587-7175dec43dfb-kube-api-access-xcgtl\") pod \"node-resolver-jgxjq\" (UID: \"12e0eed0-c83b-4418-9587-7175dec43dfb\") " pod="openshift-dns/node-resolver-jgxjq" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146873 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-var-lib-kubelet\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.146983 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-multus-cni-dir\") pod \"multus-sr8bs\" (UID: 
\"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.147080 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/631d8e25-82dd-4462-b98d-f076e7264b67-cni-binary-copy\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.147061 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.160331 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.181835 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.190374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.190624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.190726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.190815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.190905 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:33Z","lastTransitionTime":"2026-02-18T00:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.195764 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.234319 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.247583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-multus-cni-dir\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.247625 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/631d8e25-82dd-4462-b98d-f076e7264b67-cni-binary-copy\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.247645 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-system-cni-dir\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.247662 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-multus-conf-dir\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.247681 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/631d8e25-82dd-4462-b98d-f076e7264b67-multus-daemon-config\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-run-multus-certs\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.247751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-multus-conf-dir\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248530 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-multus-cni-dir\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/631d8e25-82dd-4462-b98d-f076e7264b67-multus-daemon-config\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248423 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/631d8e25-82dd-4462-b98d-f076e7264b67-cni-binary-copy\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248570 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " 
pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-os-release\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248689 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-run-k8s-cni-cncf-io\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.247852 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-system-cni-dir\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248600 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-run-multus-certs\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248738 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-os-release\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248757 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-run-k8s-cni-cncf-io\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-var-lib-cni-bin\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248836 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e24aebe5-ff91-47a8-b642-d7dcc25f9089-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248842 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-var-lib-cni-bin\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248874 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-system-cni-dir\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248858 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-system-cni-dir\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248952 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e24aebe5-ff91-47a8-b642-d7dcc25f9089-cni-binary-copy\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.248981 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-etc-kubernetes\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249059 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n62vk\" (UniqueName: \"kubernetes.io/projected/e24aebe5-ff91-47a8-b642-d7dcc25f9089-kube-api-access-n62vk\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249088 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-cnibin\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249113 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-etc-kubernetes\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249116 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/12e0eed0-c83b-4418-9587-7175dec43dfb-hosts-file\") pod \"node-resolver-jgxjq\" (UID: \"12e0eed0-c83b-4418-9587-7175dec43dfb\") " pod="openshift-dns/node-resolver-jgxjq" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249149 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-run-netns\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249160 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/12e0eed0-c83b-4418-9587-7175dec43dfb-hosts-file\") 
pod \"node-resolver-jgxjq\" (UID: \"12e0eed0-c83b-4418-9587-7175dec43dfb\") " pod="openshift-dns/node-resolver-jgxjq" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-hostroot\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249188 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-hostroot\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249187 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-cnibin\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249201 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-cnibin\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-run-netns\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249229 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-cnibin\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249241 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249254 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-var-lib-cni-multus\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249231 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-var-lib-cni-multus\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249289 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-os-release\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249330 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-multus-socket-dir-parent\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt52b\" (UniqueName: \"kubernetes.io/projected/631d8e25-82dd-4462-b98d-f076e7264b67-kube-api-access-bt52b\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-var-lib-kubelet\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249401 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcgtl\" (UniqueName: \"kubernetes.io/projected/12e0eed0-c83b-4418-9587-7175dec43dfb-kube-api-access-xcgtl\") pod \"node-resolver-jgxjq\" (UID: \"12e0eed0-c83b-4418-9587-7175dec43dfb\") " pod="openshift-dns/node-resolver-jgxjq" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249406 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-multus-socket-dir-parent\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249415 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e24aebe5-ff91-47a8-b642-d7dcc25f9089-os-release\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/631d8e25-82dd-4462-b98d-f076e7264b67-host-var-lib-kubelet\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249639 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e24aebe5-ff91-47a8-b642-d7dcc25f9089-cni-binary-copy\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.249836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/e24aebe5-ff91-47a8-b642-d7dcc25f9089-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.275470 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.289127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcgtl\" (UniqueName: \"kubernetes.io/projected/12e0eed0-c83b-4418-9587-7175dec43dfb-kube-api-access-xcgtl\") pod \"node-resolver-jgxjq\" (UID: \"12e0eed0-c83b-4418-9587-7175dec43dfb\") " pod="openshift-dns/node-resolver-jgxjq" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.293112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n62vk\" (UniqueName: \"kubernetes.io/projected/e24aebe5-ff91-47a8-b642-d7dcc25f9089-kube-api-access-n62vk\") pod \"multus-additional-cni-plugins-n4pmf\" (UID: \"e24aebe5-ff91-47a8-b642-d7dcc25f9089\") " pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 
00:34:33.294344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt52b\" (UniqueName: \"kubernetes.io/projected/631d8e25-82dd-4462-b98d-f076e7264b67-kube-api-access-bt52b\") pod \"multus-sr8bs\" (UID: \"631d8e25-82dd-4462-b98d-f076e7264b67\") " pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.298628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.298650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.298660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.298673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.298684 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:33Z","lastTransitionTime":"2026-02-18T00:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.304840 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.330427 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jgxjq" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.338600 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.351589 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sr8bs" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.359653 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: W0218 00:34:33.361028 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode24aebe5_ff91_47a8_b642_d7dcc25f9089.slice/crio-a2027f5cbadf75551152937cdc1455baf0c4ec6179865fd57d0d7baa3ac7f9a5 WatchSource:0}: Error finding container a2027f5cbadf75551152937cdc1455baf0c4ec6179865fd57d0d7baa3ac7f9a5: Status 404 returned error can't find the container with id a2027f5cbadf75551152937cdc1455baf0c4ec6179865fd57d0d7baa3ac7f9a5 Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.374231 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:52:10.319150109 +0000 UTC Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.401846 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.408131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.408158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.408166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.408179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.408187 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:33Z","lastTransitionTime":"2026-02-18T00:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.418978 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-cbdbf"] Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.419345 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.420238 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:33 crc kubenswrapper[4858]: E0218 00:34:33.420332 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.420902 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:33 crc kubenswrapper[4858]: E0218 00:34:33.420961 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.421511 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:33 crc kubenswrapper[4858]: E0218 00:34:33.421563 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.422636 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.422830 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.423092 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.423258 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.423325 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.423465 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.427079 4858 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jjq7k"] Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.427948 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.432203 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.432253 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.432425 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.432475 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.432634 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.432830 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.435251 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.442043 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.463197 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.476688 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.490993 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.505560 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.510528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.510570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.510580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.510596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.510606 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:33Z","lastTransitionTime":"2026-02-18T00:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.521630 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.533878 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.545364 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-netd\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553167 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/62c71780-47e7-4e14-9b93-60050f6f3141-ovn-node-metrics-cert\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553201 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-ovn-kubernetes\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553246 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7172df49-6116-4968-a2b5-a1afb116568b-proxy-tls\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553271 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-log-socket\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7172df49-6116-4968-a2b5-a1afb116568b-rootfs\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553348 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-etc-openvswitch\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553366 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-script-lib\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-config\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-systemd-units\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553481 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-var-lib-openvswitch\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7172df49-6116-4968-a2b5-a1afb116568b-mcd-auth-proxy-config\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553552 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-bin\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553575 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-node-log\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553618 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4snxj\" (UniqueName: \"kubernetes.io/projected/7172df49-6116-4968-a2b5-a1afb116568b-kube-api-access-4snxj\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-netns\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553684 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dd5n\" (UniqueName: \"kubernetes.io/projected/62c71780-47e7-4e14-9b93-60050f6f3141-kube-api-access-5dd5n\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" 
Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553703 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-openvswitch\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553736 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-slash\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-env-overrides\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553834 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-kubelet\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553850 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-systemd\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.553869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-ovn\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.557529 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.572525 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.588531 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.598347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerStarted","Data":"d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.598404 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerStarted","Data":"a2027f5cbadf75551152937cdc1455baf0c4ec6179865fd57d0d7baa3ac7f9a5"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.599643 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jgxjq" 
event={"ID":"12e0eed0-c83b-4418-9587-7175dec43dfb","Type":"ContainerStarted","Data":"cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.599679 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jgxjq" event={"ID":"12e0eed0-c83b-4418-9587-7175dec43dfb","Type":"ContainerStarted","Data":"8ebb9db67f94a6246ce4af333299116100a7e7f23cd62fbaa3176d8daa330e3c"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.600935 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sr8bs" event={"ID":"631d8e25-82dd-4462-b98d-f076e7264b67","Type":"ContainerStarted","Data":"fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.600980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sr8bs" event={"ID":"631d8e25-82dd-4462-b98d-f076e7264b67","Type":"ContainerStarted","Data":"6b9e160a98ff805e56e779f36deb038d5ac5788c34d44ab755a1af63f867cc54"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.601467 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 00:29:32 +0000 UTC, rotation deadline is 2026-12-02 23:31:06.567085221 +0000 UTC Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.601536 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6910h56m32.965551351s for next certificate rotation Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.602941 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.612987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.613050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.613065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.613082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.613094 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:33Z","lastTransitionTime":"2026-02-18T00:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.629564 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.646338 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655044 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-slash\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-kubelet\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655147 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-systemd\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-ovn\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-env-overrides\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655281 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-slash\") pod \"ovnkube-node-jjq7k\" (UID: 
\"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655327 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655320 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-kubelet\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-ovn\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-netd\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-systemd\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/62c71780-47e7-4e14-9b93-60050f6f3141-ovn-node-metrics-cert\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655636 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-ovn-kubernetes\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-netd\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655667 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7172df49-6116-4968-a2b5-a1afb116568b-proxy-tls\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " 
pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-log-socket\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655778 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-ovn-kubernetes\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655846 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-env-overrides\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655852 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-log-socket\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655884 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7172df49-6116-4968-a2b5-a1afb116568b-rootfs\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-etc-openvswitch\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.655972 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7172df49-6116-4968-a2b5-a1afb116568b-rootfs\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656015 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-etc-openvswitch\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656000 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-script-lib\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc 
kubenswrapper[4858]: I0218 00:34:33.656087 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-systemd-units\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656103 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-var-lib-openvswitch\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656121 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-config\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656152 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7172df49-6116-4968-a2b5-a1afb116568b-mcd-auth-proxy-config\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-bin\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-node-log\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4snxj\" (UniqueName: \"kubernetes.io/projected/7172df49-6116-4968-a2b5-a1afb116568b-kube-api-access-4snxj\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-netns\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656268 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dd5n\" (UniqueName: \"kubernetes.io/projected/62c71780-47e7-4e14-9b93-60050f6f3141-kube-api-access-5dd5n\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc 
kubenswrapper[4858]: I0218 00:34:33.656294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-openvswitch\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-openvswitch\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656386 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-node-log\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-var-lib-openvswitch\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-script-lib\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.656861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-systemd-units\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.657197 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-netns\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.657401 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-config\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.657830 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7172df49-6116-4968-a2b5-a1afb116568b-mcd-auth-proxy-config\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.657920 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-bin\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.658865 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7172df49-6116-4968-a2b5-a1afb116568b-proxy-tls\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.659028 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/62c71780-47e7-4e14-9b93-60050f6f3141-ovn-node-metrics-cert\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.666445 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.678762 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4snxj\" (UniqueName: \"kubernetes.io/projected/7172df49-6116-4968-a2b5-a1afb116568b-kube-api-access-4snxj\") pod \"machine-config-daemon-cbdbf\" (UID: \"7172df49-6116-4968-a2b5-a1afb116568b\") " pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.678905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dd5n\" (UniqueName: \"kubernetes.io/projected/62c71780-47e7-4e14-9b93-60050f6f3141-kube-api-access-5dd5n\") pod \"ovnkube-node-jjq7k\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 
00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.683428 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.696318 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.712929 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.715182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.715215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 
00:34:33.715223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.715237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.715245 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:33Z","lastTransitionTime":"2026-02-18T00:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.723750 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.733253 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.745806 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.745793 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.753584 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.759740 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: W0218 00:34:33.766405 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62c71780_47e7_4e14_9b93_60050f6f3141.slice/crio-b089f5d406742cc184f82326fee6a53a24ed29bae92c39f55b92d9e792a0fc8c WatchSource:0}: Error finding container b089f5d406742cc184f82326fee6a53a24ed29bae92c39f55b92d9e792a0fc8c: Status 404 returned error can't find the container with id b089f5d406742cc184f82326fee6a53a24ed29bae92c39f55b92d9e792a0fc8c Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.773756 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.786316 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.796989 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.807070 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.817296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.817333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.817348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.817366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.817378 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:33Z","lastTransitionTime":"2026-02-18T00:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.817458 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.831939 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.919969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 
00:34:33.920041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.920063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.920097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:33 crc kubenswrapper[4858]: I0218 00:34:33.920121 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:33Z","lastTransitionTime":"2026-02-18T00:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.023431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.023528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.023548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.023572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.023592 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.126483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.126559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.126572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.126594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.126607 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.228716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.228782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.228799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.228821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.228837 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.331591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.331647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.331662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.331683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.331697 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.375143 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:31:48.527594792 +0000 UTC Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.434291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.434326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.434337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.434352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.434365 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.536300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.536345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.536356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.536373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.536384 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.608705 4858 generic.go:334] "Generic (PLEG): container finished" podID="e24aebe5-ff91-47a8-b642-d7dcc25f9089" containerID="d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665" exitCode=0 Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.608769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerDied","Data":"d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.613052 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.613155 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.613187 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"daed3bc3e7edffb91b56e4e3fd96e9131e493ade55f1405c4f5ee2ca70a4ef34"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.615899 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd" exitCode=0 Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.615962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.615999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"b089f5d406742cc184f82326fee6a53a24ed29bae92c39f55b92d9e792a0fc8c"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.625513 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.638903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.638929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.638938 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.638952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.638961 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.646300 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.662753 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.714813 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.736247 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.741807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.741842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.741850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.741863 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.741878 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.749849 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.760438 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.777347 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d
d5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.789347 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.800261 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.810874 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.823512 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.834423 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.844776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.844826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.844840 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.844858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.844871 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.846867 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.858988 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.879318 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.892570 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.904157 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.910870 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-v2whc"] Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.911531 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.913592 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.913739 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.914455 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.915272 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.919222 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.934022 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.947156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.947202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.947213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.947229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.947240 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:34Z","lastTransitionTime":"2026-02-18T00:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.948508 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.960780 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.973233 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.976677 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j42x\" (UniqueName: \"kubernetes.io/projected/f362c73a-7069-42a2-b85e-4e823a1a8fb3-kube-api-access-8j42x\") pod \"node-ca-v2whc\" (UID: \"f362c73a-7069-42a2-b85e-4e823a1a8fb3\") " pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.976726 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f362c73a-7069-42a2-b85e-4e823a1a8fb3-serviceca\") pod \"node-ca-v2whc\" (UID: \"f362c73a-7069-42a2-b85e-4e823a1a8fb3\") " pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.976750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f362c73a-7069-42a2-b85e-4e823a1a8fb3-host\") pod \"node-ca-v2whc\" (UID: \"f362c73a-7069-42a2-b85e-4e823a1a8fb3\") " pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:34 crc kubenswrapper[4858]: I0218 00:34:34.983544 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.002283 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.016292 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.030112 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.041556 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.049890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.049922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.049930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.049947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.049956 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.052707 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.068417 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.077671 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.077776 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.077809 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:34:43.077778736 +0000 UTC m=+36.383615478 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.077860 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f362c73a-7069-42a2-b85e-4e823a1a8fb3-serviceca\") pod \"node-ca-v2whc\" (UID: \"f362c73a-7069-42a2-b85e-4e823a1a8fb3\") " pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.077912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.077969 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f362c73a-7069-42a2-b85e-4e823a1a8fb3-host\") pod \"node-ca-v2whc\" (UID: \"f362c73a-7069-42a2-b85e-4e823a1a8fb3\") " pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.077999 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.077913 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.078036 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078040 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.078074 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8j42x\" (UniqueName: \"kubernetes.io/projected/f362c73a-7069-42a2-b85e-4e823a1a8fb3-kube-api-access-8j42x\") pod \"node-ca-v2whc\" (UID: \"f362c73a-7069-42a2-b85e-4e823a1a8fb3\") " pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078122 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:43.078104294 +0000 UTC m=+36.383941016 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078139 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:43.078132144 +0000 UTC m=+36.383968876 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078290 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078316 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078336 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078396 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:43.0783705 +0000 UTC m=+36.384207262 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078428 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078448 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078461 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.078458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f362c73a-7069-42a2-b85e-4e823a1a8fb3-host\") pod \"node-ca-v2whc\" (UID: \"f362c73a-7069-42a2-b85e-4e823a1a8fb3\") " pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.078525 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:43.078489933 +0000 UTC m=+36.384326685 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.079307 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f362c73a-7069-42a2-b85e-4e823a1a8fb3-serviceca\") pod \"node-ca-v2whc\" (UID: \"f362c73a-7069-42a2-b85e-4e823a1a8fb3\") " pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.080006 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b0153
35bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.094613 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.099791 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j42x\" (UniqueName: \"kubernetes.io/projected/f362c73a-7069-42a2-b85e-4e823a1a8fb3-kube-api-access-8j42x\") pod \"node-ca-v2whc\" (UID: \"f362c73a-7069-42a2-b85e-4e823a1a8fb3\") " pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.109367 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.127417 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.141400 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.152932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.152959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.152967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.152980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.152990 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.155656 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.166868 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.199804 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z 
is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.212874 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.223604 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.255650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.255958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.255970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.255990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.256003 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.263015 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-v2whc" Feb 18 00:34:35 crc kubenswrapper[4858]: W0218 00:34:35.277141 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf362c73a_7069_42a2_b85e_4e823a1a8fb3.slice/crio-fc58cc74b86183138566a4bd5916e8af0489830743a9963b019264a5109838fa WatchSource:0}: Error finding container fc58cc74b86183138566a4bd5916e8af0489830743a9963b019264a5109838fa: Status 404 returned error can't find the container with id fc58cc74b86183138566a4bd5916e8af0489830743a9963b019264a5109838fa Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.357839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.357880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.357894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.357910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.357921 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.376252 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:51:11.763099289 +0000 UTC Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.420325 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.420586 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.420654 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.420712 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.420852 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:35 crc kubenswrapper[4858]: E0218 00:34:35.420942 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.460185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.460226 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.460239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.460257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.460270 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.562870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.562908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.562917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.562932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.562942 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.623998 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.624062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.624081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.624097 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.624114 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.625529 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v2whc" event={"ID":"f362c73a-7069-42a2-b85e-4e823a1a8fb3","Type":"ContainerStarted","Data":"30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.625585 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v2whc" event={"ID":"f362c73a-7069-42a2-b85e-4e823a1a8fb3","Type":"ContainerStarted","Data":"fc58cc74b86183138566a4bd5916e8af0489830743a9963b019264a5109838fa"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.627892 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerStarted","Data":"2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.640949 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.651529 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.664517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.664552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.664560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.664574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.664586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.666359 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.677900 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.689269 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.702547 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.715260 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.725166 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.735345 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.748670 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.762473 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.766470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.766526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.766538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.766557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.766568 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.777038 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.788230 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.808530 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z 
is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.822711 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.837005 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.852680 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.869293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.869329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.869338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.869352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.869361 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.870348 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.884320 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.902092 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.946058 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.993078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.993134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.993151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.993175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:35 crc kubenswrapper[4858]: I0218 00:34:35.993193 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:35Z","lastTransitionTime":"2026-02-18T00:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.004583 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.029048 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.066886 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.096259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.096325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.096348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.096378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.096397 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.107128 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.159856 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z 
is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.188838 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.199093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.199156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.199174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.199199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.199218 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.223166 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.302347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.302398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.302416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.302442 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.302462 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.377382 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:14:08.924677418 +0000 UTC Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.408659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.408755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.408782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.408815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.408838 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.511717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.511792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.511814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.511849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.511871 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.614812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.614879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.614897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.614922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.614939 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.642623 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.645106 4858 generic.go:334] "Generic (PLEG): container finished" podID="e24aebe5-ff91-47a8-b642-d7dcc25f9089" containerID="2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec" exitCode=0 Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.645163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerDied","Data":"2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.684960 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.700647 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.717842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.717899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.717916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.717980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.718012 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.722956 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.741994 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.761626 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.774851 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.788849 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 
2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.803734 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.819743 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.821485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.821559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.821576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.821596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.821615 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.831363 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.840878 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.853758 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.871979 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.884859 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.924110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.924144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.924152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.924165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.924174 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.937102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.937141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.937152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.937167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.937178 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: E0218 00:34:36.948644 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.951958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.951989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.951997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.952010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.952019 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: E0218 00:34:36.964439 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.967570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.967604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.967614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.967627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.967635 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:36 crc kubenswrapper[4858]: E0218 00:34:36.986757 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.992220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.992263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.992275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.992291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:36 crc kubenswrapper[4858]: I0218 00:34:36.992303 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:36Z","lastTransitionTime":"2026-02-18T00:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: E0218 00:34:37.028219 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.031941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.031970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.031978 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.031993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.032001 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: E0218 00:34:37.043432 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: E0218 00:34:37.043574 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.044876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.044918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.044935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.044957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.044975 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.147245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.147294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.147308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.147330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.147346 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.192066 4858 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.249719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.249984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.250082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.250186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.250284 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.352941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.353001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.353019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.353043 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.353062 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.377613 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:35:39.586411578 +0000 UTC Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.419346 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.419392 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.419454 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:37 crc kubenswrapper[4858]: E0218 00:34:37.419598 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:37 crc kubenswrapper[4858]: E0218 00:34:37.419712 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:37 crc kubenswrapper[4858]: E0218 00:34:37.419806 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.438047 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.452817 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.455940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.455987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.456000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.456023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.456037 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.472526 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.491165 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.513169 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.538719 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.560051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.560119 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.560143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.560172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.560196 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.565403 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.583156 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.598395 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.620524 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.639488 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.651360 4858 generic.go:334] "Generic (PLEG): container finished" podID="e24aebe5-ff91-47a8-b642-d7dcc25f9089" containerID="bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a" exitCode=0 Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.651419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerDied","Data":"bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.661093 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mu
ltus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.662538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.662594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.662614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.662643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.662665 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.681178 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.714332 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.733791 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.749022 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.765061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.765114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.765131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.765155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.765172 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.768554 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.784701 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.804050 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.828309 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.845795 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.865316 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.867905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.867962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.867981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.868008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.868026 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.882017 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.900932 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.919365 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"
/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.935114 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.963760 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z 
is after 2025-08-24T17:21:41Z" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.969814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.969844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.969854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.969867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.969876 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:37Z","lastTransitionTime":"2026-02-18T00:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:37 crc kubenswrapper[4858]: I0218 00:34:37.981809 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.072448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.072555 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.072580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.072609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.072631 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.176687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.176754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.176773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.176799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.176816 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.279744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.279835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.279859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.279893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.279916 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.378101 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:05:13.427046722 +0000 UTC Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.382602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.382676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.382694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.382718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.382737 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.485067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.485122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.485141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.485163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.485179 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.588291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.588357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.588373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.588397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.588416 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.662919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.667988 4858 generic.go:334] "Generic (PLEG): container finished" podID="e24aebe5-ff91-47a8-b642-d7dcc25f9089" containerID="444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3" exitCode=0 Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.668037 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerDied","Data":"444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.691791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.691857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.691879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.691910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.691933 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.693217 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.711254 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.727375 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.739288 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.755701 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.771374 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.788585 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.794444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.794477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.794487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.794518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.794529 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.821607 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z 
is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.836137 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.851765 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.861463 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.873878 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.886728 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.896930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.896977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.896994 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.897018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.897040 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.901162 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.999571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 
00:34:38.999611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.999622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.999640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:38 crc kubenswrapper[4858]: I0218 00:34:38.999650 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:38Z","lastTransitionTime":"2026-02-18T00:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.102200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.102260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.102282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.102311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.102332 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:39Z","lastTransitionTime":"2026-02-18T00:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.205153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.205563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.205581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.205606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.205622 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:39Z","lastTransitionTime":"2026-02-18T00:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.308272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.308323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.308340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.308364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.308381 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:39Z","lastTransitionTime":"2026-02-18T00:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.378678 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:39:12.96287734 +0000 UTC Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.410820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.410879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.410894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.410921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.410940 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:39Z","lastTransitionTime":"2026-02-18T00:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.419324 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.419329 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:39 crc kubenswrapper[4858]: E0218 00:34:39.419594 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.419633 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:39 crc kubenswrapper[4858]: E0218 00:34:39.419788 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:39 crc kubenswrapper[4858]: E0218 00:34:39.419998 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.513388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.513437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.513454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.513477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.513516 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:39Z","lastTransitionTime":"2026-02-18T00:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.616977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.617018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.617029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.617046 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.617056 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:39Z","lastTransitionTime":"2026-02-18T00:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.679805 4858 generic.go:334] "Generic (PLEG): container finished" podID="e24aebe5-ff91-47a8-b642-d7dcc25f9089" containerID="8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6" exitCode=0 Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.679853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerDied","Data":"8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.703682 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.721672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.721734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.721758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.721791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.721817 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:39Z","lastTransitionTime":"2026-02-18T00:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.725761 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.747389 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.763717 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.787760 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.804754 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.820349 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.824544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.824609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.824658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.824690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.824713 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:39Z","lastTransitionTime":"2026-02-18T00:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.835430 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.854183 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.867661 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.882138 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.899539 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.916319 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.928235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.928275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.928285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.928303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.928313 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:39Z","lastTransitionTime":"2026-02-18T00:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:39 crc kubenswrapper[4858]: I0218 00:34:39.936178 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.031222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 
00:34:40.031265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.031277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.031299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.031313 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.133617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.133681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.133700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.133724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.133741 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.237096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.237175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.237199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.237236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.237261 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.339790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.339838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.339854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.339877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.339894 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.379343 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:46:44.5256838 +0000 UTC Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.443075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.443116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.443132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.443158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.443175 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.546353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.546413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.546431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.546456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.546480 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.650276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.650336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.650353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.650376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.650393 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.689263 4858 generic.go:334] "Generic (PLEG): container finished" podID="e24aebe5-ff91-47a8-b642-d7dcc25f9089" containerID="21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8" exitCode=0 Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.689357 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerDied","Data":"21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.698295 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.698741 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.698782 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.709945 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.731851 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.753873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.753913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.753934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.753953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.753965 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.753857 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.761997 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.762951 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.779397 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.794623 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.808004 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.827606 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.842670 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.856271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.856323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc 
kubenswrapper[4858]: I0218 00:34:40.856339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.856360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.856375 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.857201 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.869865 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.878861 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.896013 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z 
is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.908463 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.920635 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.935790 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.947086 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.959653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.959700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.959736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.959758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.959769 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:40Z","lastTransitionTime":"2026-02-18T00:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.960598 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.975923 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:40 crc kubenswrapper[4858]: I0218 00:34:40.989610 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:40Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.003346 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.019811 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.039421 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.091700 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.093596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.093651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.093671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.093697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.093715 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:41Z","lastTransitionTime":"2026-02-18T00:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.110262 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.142015 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a16
7826b995c337496d999ad63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.158846 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.174564 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.191768 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.197924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.198004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.198023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.198553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.198617 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:41Z","lastTransitionTime":"2026-02-18T00:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.302130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.302209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.302235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.302264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.302287 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:41Z","lastTransitionTime":"2026-02-18T00:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.379869 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:43:11.769519152 +0000 UTC Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.405988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.406063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.406087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.406118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.406142 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:41Z","lastTransitionTime":"2026-02-18T00:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.418732 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.418804 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.418900 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:41 crc kubenswrapper[4858]: E0218 00:34:41.418894 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:41 crc kubenswrapper[4858]: E0218 00:34:41.419052 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:41 crc kubenswrapper[4858]: E0218 00:34:41.419195 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.509658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.509745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.509771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.509814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.509838 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:41Z","lastTransitionTime":"2026-02-18T00:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.612791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.612859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.612883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.612912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.612936 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:41Z","lastTransitionTime":"2026-02-18T00:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.710376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" event={"ID":"e24aebe5-ff91-47a8-b642-d7dcc25f9089","Type":"ContainerStarted","Data":"0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.710426 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.717613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.717674 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.717692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.717716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.717734 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:41Z","lastTransitionTime":"2026-02-18T00:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.737272 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.760190 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.778229 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.800147 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a16
7826b995c337496d999ad63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.819626 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.820613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.820670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.820692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.820720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.820742 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:41Z","lastTransitionTime":"2026-02-18T00:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.836636 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.855558 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.876707 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.893641 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.910573 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.923722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:41 crc 
kubenswrapper[4858]: I0218 00:34:41.923791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.923811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.923842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.923864 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:41Z","lastTransitionTime":"2026-02-18T00:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.930922 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mount
Path\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.952471 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.968374 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:41 crc kubenswrapper[4858]: I0218 00:34:41.984793 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:41Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.026696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.026763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.026776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.026799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.026816 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.130260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.130298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.130313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.130333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.130344 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.232779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.232805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.232813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.232826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.232835 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.335431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.335538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.335557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.335581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.335600 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.380252 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:16:33.723268319 +0000 UTC Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.437647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.437671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.437679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.437744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.437754 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.540516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.540555 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.540566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.540584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.540594 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.643611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.643660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.643673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.643697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.643713 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.645656 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.671677 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a16
7826b995c337496d999ad63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.694790 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.716289 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.721202 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"n
ame\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.741231 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.745987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.746040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.746055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.746107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.746123 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.759574 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.776112 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.804783 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.822314 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.836962 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.849006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.849062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.849076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.849097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.849110 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.853559 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.868305 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.888742 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.904944 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.919189 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.952419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.952485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.952533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.952564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:42 crc kubenswrapper[4858]: I0218 00:34:42.952586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:42Z","lastTransitionTime":"2026-02-18T00:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.055866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.055931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.055949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.055972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.055989 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.160150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.160245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.160272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.160300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.160321 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.169697 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.169815 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.169863 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.169911 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.169947 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170161 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170197 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170216 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170280 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:59.170258871 +0000 UTC m=+52.476095633 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170659 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:34:59.170640351 +0000 UTC m=+52.476477123 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170745 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170790 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:59.170777174 +0000 UTC m=+52.476613936 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170841 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170876 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:59.170865516 +0000 UTC m=+52.476702278 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170950 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170967 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.170982 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.171037 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:34:59.17102437 +0000 UTC m=+52.476861132 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.263488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.263585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.263607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.263639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.263660 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.366131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.366207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.366225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.366243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.366257 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.380770 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:35:49.141589367 +0000 UTC Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.419335 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.419471 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.419530 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.419604 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.419644 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:43 crc kubenswrapper[4858]: E0218 00:34:43.419836 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.468691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.468748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.468766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.468793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.468811 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.572412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.572478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.572548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.572580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.572637 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.676353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.676449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.676468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.676521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.676538 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.722090 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/0.log" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.730550 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a" exitCode=1 Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.730608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.731896 4858 scope.go:117] "RemoveContainer" containerID="b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.753640 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.776472 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.780000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.780329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.780548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.780717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.780853 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.800990 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.826470 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.845225 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.868017 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.884580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.884620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.884640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.884670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.884688 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.886229 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.903193 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.923156 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.946729 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.960829 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.981246 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:43Z\\\",\\\"message\\\":\\\"/factory.go:141\\\\nI0218 00:34:43.501291 6179 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501356 6179 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501345 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:34:43.501378 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:34:43.501406 6179 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:34:43.501417 6179 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:34:43.501458 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:34:43.501480 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:43.501523 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:43.501551 6179 factory.go:656] Stopping watch factory\\\\nI0218 00:34:43.501576 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 00:34:43.501601 6179 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:34:43.501623 6179 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:34:43.501639 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 00:34:43.501725 6179 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.990039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.990069 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.990082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.990103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.990117 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:43Z","lastTransitionTime":"2026-02-18T00:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:43 crc kubenswrapper[4858]: I0218 00:34:43.999379 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.012106 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.092668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.092735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.092746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.092768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.092784 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:44Z","lastTransitionTime":"2026-02-18T00:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.195606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.195677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.195701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.195731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.195753 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:44Z","lastTransitionTime":"2026-02-18T00:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.298989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.299041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.299059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.299081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.299098 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:44Z","lastTransitionTime":"2026-02-18T00:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.381958 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:16:28.020572507 +0000 UTC Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.402130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.402168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.402179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.402194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.402206 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:44Z","lastTransitionTime":"2026-02-18T00:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.504006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.504049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.504061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.504078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.504088 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:44Z","lastTransitionTime":"2026-02-18T00:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.607024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.607083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.607101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.607126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.607143 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:44Z","lastTransitionTime":"2026-02-18T00:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.710207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.710256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.710272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.710299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.710316 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:44Z","lastTransitionTime":"2026-02-18T00:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.736071 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/0.log" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.740057 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.740251 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.763106 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.784281 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.802304 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.813761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.813826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.813850 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.813881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.813904 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:44Z","lastTransitionTime":"2026-02-18T00:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.823217 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.840179 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.857206 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.874671 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.902175 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.916223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.916280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:44 crc 
kubenswrapper[4858]: I0218 00:34:44.916299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.916327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.916345 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:44Z","lastTransitionTime":"2026-02-18T00:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.923104 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.939405 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.952899 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:44 crc kubenswrapper[4858]: I0218 00:34:44.984778 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e
64c460cc33c339bb997c1f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:43Z\\\",\\\"message\\\":\\\"/factory.go:141\\\\nI0218 00:34:43.501291 6179 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501356 6179 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501345 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:34:43.501378 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:34:43.501406 6179 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:34:43.501417 6179 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:34:43.501458 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:34:43.501480 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:43.501523 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:43.501551 6179 factory.go:656] Stopping watch factory\\\\nI0218 00:34:43.501576 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 00:34:43.501601 6179 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:34:43.501623 6179 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:34:43.501639 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 00:34:43.501725 6179 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:44Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.004477 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.019145 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.019952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.019995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.020016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.020040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.020057 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.122661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.122721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.122737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.122764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.122782 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.226299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.226358 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.226375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.226399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.226416 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.330065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.330135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.330152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.330176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.330193 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.382158 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:58:21.457192235 +0000 UTC Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.418883 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.418903 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.418838 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:45 crc kubenswrapper[4858]: E0218 00:34:45.419085 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:45 crc kubenswrapper[4858]: E0218 00:34:45.419005 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:45 crc kubenswrapper[4858]: E0218 00:34:45.419223 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.432802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.432858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.432876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.432900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.432919 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.536217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.536290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.536313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.536343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.536365 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.641138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.641190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.641203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.641222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.641241 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.743665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.743721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.743737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.743758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.743775 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.747294 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/1.log" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.748190 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/0.log" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.752255 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87" exitCode=1 Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.752327 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.752394 4858 scope.go:117] "RemoveContainer" containerID="b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.753587 4858 scope.go:117] "RemoveContainer" containerID="278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87" Feb 18 00:34:45 crc kubenswrapper[4858]: E0218 00:34:45.753883 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.776041 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.794158 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.811191 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.830360 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.846632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.846682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.846695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.846712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.846727 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.851188 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.866030 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.884441 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.889200 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml"] Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.889868 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.891572 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.892358 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.900413 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.922680 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:43Z\\\",\\\"message\\\":\\\"/factory.go:141\\\\nI0218 00:34:43.501291 6179 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501356 6179 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501345 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:34:43.501378 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:34:43.501406 6179 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:34:43.501417 6179 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:34:43.501458 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:34:43.501480 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:43.501523 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:43.501551 6179 factory.go:656] Stopping watch factory\\\\nI0218 00:34:43.501576 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 00:34:43.501601 6179 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:34:43.501623 6179 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:34:43.501639 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 00:34:43.501725 6179 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service 
openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabb
d571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.936730 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.949393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.949446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.949460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.949480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.949519 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:45Z","lastTransitionTime":"2026-02-18T00:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.950865 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.965582 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.980597 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:45 crc kubenswrapper[4858]: I0218 00:34:45.998587 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.001593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c37420e-6ee9-4827-be9c-060d919663b0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.001662 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c37420e-6ee9-4827-be9c-060d919663b0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.001698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c37420e-6ee9-4827-be9c-060d919663b0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.001813 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvmh6\" (UniqueName: \"kubernetes.io/projected/4c37420e-6ee9-4827-be9c-060d919663b0-kube-api-access-kvmh6\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.014569 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.029475 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.041696 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.052404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.052475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.052537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.052563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.052580 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.063965 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.083333 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.101037 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.102368 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c37420e-6ee9-4827-be9c-060d919663b0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.102429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c37420e-6ee9-4827-be9c-060d919663b0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.102478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvmh6\" (UniqueName: \"kubernetes.io/projected/4c37420e-6ee9-4827-be9c-060d919663b0-kube-api-access-kvmh6\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.102629 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c37420e-6ee9-4827-be9c-060d919663b0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.103634 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c37420e-6ee9-4827-be9c-060d919663b0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.103794 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c37420e-6ee9-4827-be9c-060d919663b0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.111453 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c37420e-6ee9-4827-be9c-060d919663b0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.128444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvmh6\" (UniqueName: \"kubernetes.io/projected/4c37420e-6ee9-4827-be9c-060d919663b0-kube-api-access-kvmh6\") pod \"ovnkube-control-plane-749d76644c-gnnml\" (UID: \"4c37420e-6ee9-4827-be9c-060d919663b0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.137063 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:43Z\\\",\\\"message\\\":\\\"/factory.go:141\\\\nI0218 00:34:43.501291 6179 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501356 6179 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501345 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:34:43.501378 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:34:43.501406 6179 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:34:43.501417 6179 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:34:43.501458 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:34:43.501480 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:43.501523 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:43.501551 6179 factory.go:656] Stopping watch factory\\\\nI0218 00:34:43.501576 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 00:34:43.501601 6179 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:34:43.501623 6179 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:34:43.501639 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 00:34:43.501725 6179 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] 
Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.154367 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.155678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.155709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.155720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.155736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.155747 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.170145 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.183113 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.197547 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.208302 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.217016 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: W0218 00:34:46.229352 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c37420e_6ee9_4827_be9c_060d919663b0.slice/crio-5daf84cee1407c6ae77c5ed79615617aba4f23def9f7afd0ca107ae2d739fb88 WatchSource:0}: Error finding container 5daf84cee1407c6ae77c5ed79615617aba4f23def9f7afd0ca107ae2d739fb88: Status 404 returned error can't find the container with id 5daf84cee1407c6ae77c5ed79615617aba4f23def9f7afd0ca107ae2d739fb88 Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.239128 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.258118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.258169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.258186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.258209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.258226 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.262353 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.283913 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.361547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.361581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.361589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.361602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.361611 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.383145 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:22:29.175739172 +0000 UTC Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.464371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.464427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.464446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.464470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.464487 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.567268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.567320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.567338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.567362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.567378 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.670254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.670330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.670348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.670373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.670392 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.759376 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/1.log" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.766654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" event={"ID":"4c37420e-6ee9-4827-be9c-060d919663b0","Type":"ContainerStarted","Data":"b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.766720 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" event={"ID":"4c37420e-6ee9-4827-be9c-060d919663b0","Type":"ContainerStarted","Data":"99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.766745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" event={"ID":"4c37420e-6ee9-4827-be9c-060d919663b0","Type":"ContainerStarted","Data":"5daf84cee1407c6ae77c5ed79615617aba4f23def9f7afd0ca107ae2d739fb88"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.773011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.773048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.773059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.773077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.773088 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.785699 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.800016 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.821849 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 
00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.836375 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.849465 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.869769 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.875716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.875754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.875767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.875787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.875802 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.890196 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.911176 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.930675 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.946585 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.970614 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.979185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.979232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:46 crc 
kubenswrapper[4858]: I0218 00:34:46.979248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.979273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:46 crc kubenswrapper[4858]: I0218 00:34:46.979290 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:46Z","lastTransitionTime":"2026-02-18T00:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.009166 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.039756 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.051922 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.069834 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e
64c460cc33c339bb997c1f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:43Z\\\",\\\"message\\\":\\\"/factory.go:141\\\\nI0218 00:34:43.501291 6179 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501356 6179 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501345 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:34:43.501378 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:34:43.501406 6179 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:34:43.501417 6179 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:34:43.501458 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:34:43.501480 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:43.501523 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:43.501551 6179 factory.go:656] Stopping watch factory\\\\nI0218 00:34:43.501576 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 00:34:43.501601 6179 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:34:43.501623 6179 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:34:43.501639 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 00:34:43.501725 6179 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.081305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.081356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.081370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.081388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.081400 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.147650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.147723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.147753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.147781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.147799 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.168237 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.173861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.173919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.173936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.173961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.173979 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.193398 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.198170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.198236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.198254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.198276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.198294 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.218601 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.223234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.223285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.223303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.223323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.223339 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.242316 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.247977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.248030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.248050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.248076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.248094 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.270078 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.270304 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.272458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.272531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.272549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.272572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.272589 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.375749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.375840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.375867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.375935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.375961 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.384222 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:12:17.080528749 +0000 UTC Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.419153 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.419199 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.419385 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.419391 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.419563 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.419740 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.438027 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.456101 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.474413 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.478747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.478814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.478837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.478865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.478882 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.495616 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.519723 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc 
kubenswrapper[4858]: I0218 00:34:47.540168 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.558855 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.577376 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.581893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.581946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.581962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.581983 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.581998 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.602599 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935
d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.625408 4858 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8f
eba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.646420 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.679744 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e
64c460cc33c339bb997c1f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:43Z\\\",\\\"message\\\":\\\"/factory.go:141\\\\nI0218 00:34:43.501291 6179 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501356 6179 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501345 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:34:43.501378 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:34:43.501406 6179 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:34:43.501417 6179 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:34:43.501458 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:34:43.501480 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:43.501523 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:43.501551 6179 factory.go:656] Stopping watch factory\\\\nI0218 00:34:43.501576 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 00:34:43.501601 6179 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:34:43.501623 6179 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:34:43.501639 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 00:34:43.501725 6179 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.686824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.686886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.686904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.686930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.686949 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.702450 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.722918 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.741628 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.789669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.789719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.789734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.789751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.789763 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.793560 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jbdlz"] Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.794299 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:47 crc kubenswrapper[4858]: E0218 00:34:47.794395 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.811538 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.827247 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.851560 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.874627 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.892908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.892946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.892958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.892976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.893005 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.894855 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.919583 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e
64c460cc33c339bb997c1f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:43Z\\\",\\\"message\\\":\\\"/factory.go:141\\\\nI0218 00:34:43.501291 6179 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501356 6179 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501345 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:34:43.501378 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:34:43.501406 6179 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:34:43.501417 6179 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:34:43.501458 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:34:43.501480 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:43.501523 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:43.501551 6179 factory.go:656] Stopping watch factory\\\\nI0218 00:34:43.501576 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 00:34:43.501601 6179 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:34:43.501623 6179 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:34:43.501639 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 00:34:43.501725 6179 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.921937 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.921994 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v8lq\" (UniqueName: \"kubernetes.io/projected/7064635a-c927-4499-98ce-76833fb5801c-kube-api-access-2v8lq\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.940372 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.963381 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath
\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.983420 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.995524 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.995593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.995611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.995670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:47 crc kubenswrapper[4858]: I0218 00:34:47.995692 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:47Z","lastTransitionTime":"2026-02-18T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.004295 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.021040 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.022830 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.022896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v8lq\" (UniqueName: \"kubernetes.io/projected/7064635a-c927-4499-98ce-76833fb5801c-kube-api-access-2v8lq\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:48 crc kubenswrapper[4858]: E0218 00:34:48.023098 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:48 crc kubenswrapper[4858]: E0218 00:34:48.023215 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs podName:7064635a-c927-4499-98ce-76833fb5801c nodeName:}" failed. No retries permitted until 2026-02-18 00:34:48.523184662 +0000 UTC m=+41.829021434 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs") pod "network-metrics-daemon-jbdlz" (UID: "7064635a-c927-4499-98ce-76833fb5801c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.036957 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.052123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v8lq\" (UniqueName: \"kubernetes.io/projected/7064635a-c927-4499-98ce-76833fb5801c-kube-api-access-2v8lq\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.057106 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.076190 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.098464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.098580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.098601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.098626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.098643 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:48Z","lastTransitionTime":"2026-02-18T00:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.099117 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.120333 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.201067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.201130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.201148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.201171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.201189 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:48Z","lastTransitionTime":"2026-02-18T00:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.304693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.304750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.304775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.304807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.304829 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:48Z","lastTransitionTime":"2026-02-18T00:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.384781 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:21:31.322341429 +0000 UTC Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.408344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.408418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.408443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.408472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.408528 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:48Z","lastTransitionTime":"2026-02-18T00:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.512068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.512115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.512130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.512152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.512167 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:48Z","lastTransitionTime":"2026-02-18T00:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.528277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:48 crc kubenswrapper[4858]: E0218 00:34:48.528538 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:48 crc kubenswrapper[4858]: E0218 00:34:48.528701 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs podName:7064635a-c927-4499-98ce-76833fb5801c nodeName:}" failed. No retries permitted until 2026-02-18 00:34:49.528664423 +0000 UTC m=+42.834501195 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs") pod "network-metrics-daemon-jbdlz" (UID: "7064635a-c927-4499-98ce-76833fb5801c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.615372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.615446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.615469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.615525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.615585 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:48Z","lastTransitionTime":"2026-02-18T00:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.718541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.718617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.718634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.718657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.718710 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:48Z","lastTransitionTime":"2026-02-18T00:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.820929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.820984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.821003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.821023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.821038 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:48Z","lastTransitionTime":"2026-02-18T00:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.924982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.925093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.925111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.925134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:48 crc kubenswrapper[4858]: I0218 00:34:48.925151 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:48Z","lastTransitionTime":"2026-02-18T00:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.028103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.028177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.028199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.028230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.028253 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.131440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.131533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.131552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.131576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.131594 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.235271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.235392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.235430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.235462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.235479 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.337926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.337989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.338008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.338034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.338052 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.385453 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:53:33.070223108 +0000 UTC Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.419242 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.419359 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.419381 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:49 crc kubenswrapper[4858]: E0218 00:34:49.419577 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.419627 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:49 crc kubenswrapper[4858]: E0218 00:34:49.419825 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:49 crc kubenswrapper[4858]: E0218 00:34:49.419997 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:34:49 crc kubenswrapper[4858]: E0218 00:34:49.420085 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.441094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.441427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.441684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.441937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.442148 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.540302 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:49 crc kubenswrapper[4858]: E0218 00:34:49.540548 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:49 crc kubenswrapper[4858]: E0218 00:34:49.540670 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs podName:7064635a-c927-4499-98ce-76833fb5801c nodeName:}" failed. No retries permitted until 2026-02-18 00:34:51.540640048 +0000 UTC m=+44.846476820 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs") pod "network-metrics-daemon-jbdlz" (UID: "7064635a-c927-4499-98ce-76833fb5801c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.544723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.544770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.544787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.544813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.544829 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.646950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.647021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.647038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.647062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.647080 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.750068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.750143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.750168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.750202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.750227 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.853081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.853122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.853134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.853151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.853164 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.956125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.956195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.956213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.956239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:49 crc kubenswrapper[4858]: I0218 00:34:49.956256 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:49Z","lastTransitionTime":"2026-02-18T00:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.059292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.059355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.059382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.059410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.059430 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.162654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.162710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.162727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.162749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.162766 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.266077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.266139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.266161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.266186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.266204 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.368915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.368970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.368986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.369011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.369028 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.385757 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:32:03.169578881 +0000 UTC Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.472315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.472379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.472404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.472434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.472457 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.575591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.575665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.575751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.575783 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.575809 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.678712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.678775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.678800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.678829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.678865 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.781235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.781296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.781313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.781336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.781352 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.883547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.883576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.883584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.883597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.883606 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.985772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.985817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.985828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.985844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:50 crc kubenswrapper[4858]: I0218 00:34:50.985856 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:50Z","lastTransitionTime":"2026-02-18T00:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.087853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.087903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.087934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.087960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.087979 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:51Z","lastTransitionTime":"2026-02-18T00:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.189947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.189980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.189992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.190007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.190037 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:51Z","lastTransitionTime":"2026-02-18T00:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.296383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.296633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.296698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.296762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.296835 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:51Z","lastTransitionTime":"2026-02-18T00:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.386226 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:55:08.201350531 +0000 UTC Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.398803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.398829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.398839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.398852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.398861 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:51Z","lastTransitionTime":"2026-02-18T00:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.419184 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:51 crc kubenswrapper[4858]: E0218 00:34:51.419289 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.419338 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:51 crc kubenswrapper[4858]: E0218 00:34:51.419491 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.419551 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:51 crc kubenswrapper[4858]: E0218 00:34:51.419722 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.419775 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:51 crc kubenswrapper[4858]: E0218 00:34:51.419877 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.501651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.501719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.501740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.501768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.501788 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:51Z","lastTransitionTime":"2026-02-18T00:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.562411 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:51 crc kubenswrapper[4858]: E0218 00:34:51.562667 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:51 crc kubenswrapper[4858]: E0218 00:34:51.563124 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs podName:7064635a-c927-4499-98ce-76833fb5801c nodeName:}" failed. No retries permitted until 2026-02-18 00:34:55.563089604 +0000 UTC m=+48.868926376 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs") pod "network-metrics-daemon-jbdlz" (UID: "7064635a-c927-4499-98ce-76833fb5801c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.604527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.604576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.604592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.604616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.604633 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:51Z","lastTransitionTime":"2026-02-18T00:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.707200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.707248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.707263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.707281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.707296 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:51Z","lastTransitionTime":"2026-02-18T00:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.809964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.810003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.810016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.810031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.810042 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:51Z","lastTransitionTime":"2026-02-18T00:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.912383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.912451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.912473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.912547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:51 crc kubenswrapper[4858]: I0218 00:34:51.912566 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:51Z","lastTransitionTime":"2026-02-18T00:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.015123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.015153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.015162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.015174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.015186 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.117786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.117836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.117850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.117901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.117917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.219683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.219716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.219729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.219744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.219754 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.321806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.321865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.321882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.321906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.321923 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.387290 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:07:51.639706347 +0000 UTC Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.424521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.424553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.424561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.424573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.424581 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.527757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.527828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.527851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.527880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.527902 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.631137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.631193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.631208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.631230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.631243 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.733366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.733416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.733455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.733475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.733529 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.836135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.836229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.836301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.836335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.836425 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.939573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.939639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.939661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.939689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:52 crc kubenswrapper[4858]: I0218 00:34:52.939713 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:52Z","lastTransitionTime":"2026-02-18T00:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.043002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.043105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.043130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.043210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.043236 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.145848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.145894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.145911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.145933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.145954 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.249368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.249761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.249792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.249823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.249844 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.354321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.354400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.354428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.354456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.354533 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.388407 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:29:11.072053146 +0000 UTC Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.419119 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.419185 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.419179 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.419129 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:53 crc kubenswrapper[4858]: E0218 00:34:53.419396 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:53 crc kubenswrapper[4858]: E0218 00:34:53.419684 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:34:53 crc kubenswrapper[4858]: E0218 00:34:53.419794 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:53 crc kubenswrapper[4858]: E0218 00:34:53.419904 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.456876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.456920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.456931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.456950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.456961 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.560049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.560188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.560213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.560237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.560299 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.663026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.663116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.663145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.663220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.663250 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.765922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.765979 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.765996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.766020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.766174 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.868825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.868890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.868902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.868917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.868928 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.972412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.972470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.972487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.972591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:53 crc kubenswrapper[4858]: I0218 00:34:53.972611 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:53Z","lastTransitionTime":"2026-02-18T00:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.075753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.075835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.075848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.075871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.075886 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:54Z","lastTransitionTime":"2026-02-18T00:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.178620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.178687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.178705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.178730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.178750 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:54Z","lastTransitionTime":"2026-02-18T00:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.281601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.281653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.281669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.281691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.281707 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:54Z","lastTransitionTime":"2026-02-18T00:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.385536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.385612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.385631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.385658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.385675 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:54Z","lastTransitionTime":"2026-02-18T00:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.388775 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:44:31.341247257 +0000 UTC Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.489149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.489203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.489219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.489243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.489259 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:54Z","lastTransitionTime":"2026-02-18T00:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.592320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.592683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.592779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.592872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.592962 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:54Z","lastTransitionTime":"2026-02-18T00:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.696056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.696106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.696125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.696151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.696170 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:54Z","lastTransitionTime":"2026-02-18T00:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.798841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.798888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.798903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.798922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.798937 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:54Z","lastTransitionTime":"2026-02-18T00:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.901965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.902401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.902668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.902911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:54 crc kubenswrapper[4858]: I0218 00:34:54.903107 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:54Z","lastTransitionTime":"2026-02-18T00:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.006145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.006213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.006231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.006253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.006270 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.110300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.110383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.110411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.110446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.110471 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.213642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.213719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.213742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.213771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.213793 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.316870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.316927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.316945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.316968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.316991 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.389146 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:52:59.061717617 +0000 UTC Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.418405 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.418435 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.418558 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:55 crc kubenswrapper[4858]: E0218 00:34:55.418722 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.418789 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:55 crc kubenswrapper[4858]: E0218 00:34:55.419076 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:34:55 crc kubenswrapper[4858]: E0218 00:34:55.419247 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:55 crc kubenswrapper[4858]: E0218 00:34:55.419378 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.423834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.423891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.423911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.423934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.423952 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.527755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.527815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.527834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.527860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.527878 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.604712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:55 crc kubenswrapper[4858]: E0218 00:34:55.605223 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:55 crc kubenswrapper[4858]: E0218 00:34:55.605448 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs podName:7064635a-c927-4499-98ce-76833fb5801c nodeName:}" failed. No retries permitted until 2026-02-18 00:35:03.605419652 +0000 UTC m=+56.911256414 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs") pod "network-metrics-daemon-jbdlz" (UID: "7064635a-c927-4499-98ce-76833fb5801c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.630866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.631219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.631424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.631637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.631775 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.734676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.734733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.734751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.734775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.734792 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.838230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.838290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.838309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.838335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.838353 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.942066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.942142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.942167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.942196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:55 crc kubenswrapper[4858]: I0218 00:34:55.942222 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:55Z","lastTransitionTime":"2026-02-18T00:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.046117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.046185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.046203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.046227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.046243 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.149478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.149535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.149544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.149559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.149568 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.252993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.253063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.253087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.253116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.253137 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.356204 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.356256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.356276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.356301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.356320 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.390189 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:13:46.978269061 +0000 UTC Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.459756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.459814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.459831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.459867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.459907 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.563060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.563449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.563637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.563783 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.563936 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.667067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.667176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.667194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.667224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.667244 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.770818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.770892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.770917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.770947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.770965 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.874450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.874553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.874571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.874604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.874626 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.977909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.977949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.977969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.977992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:56 crc kubenswrapper[4858]: I0218 00:34:56.978009 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:56Z","lastTransitionTime":"2026-02-18T00:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.081402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.081465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.081482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.081537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.081556 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.184831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.184890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.184910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.184933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.184953 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.288969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.289032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.289049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.289114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.289136 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.390422 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:37:43.648708188 +0000 UTC Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.392762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.392817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.392835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.392859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.392877 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.419584 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.419715 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.419776 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.419805 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.420065 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.420266 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.420455 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.420752 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.439832 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.459903 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.478462 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.490661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.490708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.490730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.490755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.490773 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.498999 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.506872 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.513255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.513339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.513358 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.513382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.513398 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.515553 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.533839 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.536592 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.538520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.538722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.538773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.538904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.538941 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.553776 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.562146 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9
d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.565901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.565965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.565989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.566018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.566042 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.571916 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.582908 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.586189 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.587664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.587718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.587732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.587749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.587760 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.605365 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: E0218 00:34:57.605603 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.608260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.608354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.608384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.608419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.608446 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.610200 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.629253 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.645410 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.669677 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e
64c460cc33c339bb997c1f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5b54eacd74bf5dfa5b438ea9707decc01e32a167826b995c337496d999ad63a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:43Z\\\",\\\"message\\\":\\\"/factory.go:141\\\\nI0218 00:34:43.501291 6179 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501356 6179 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:43.501345 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:34:43.501378 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:34:43.501406 6179 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:34:43.501417 6179 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:34:43.501458 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:34:43.501480 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:43.501523 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:43.501551 6179 factory.go:656] Stopping watch factory\\\\nI0218 00:34:43.501576 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 00:34:43.501601 6179 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:34:43.501623 6179 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:34:43.501639 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 00:34:43.501725 6179 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.686186 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.702639 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.711163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.711237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.711263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.711294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.711318 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.720747 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.813717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.813785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.813808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.813836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.813858 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.916748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.916833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.916854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.916879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:57 crc kubenswrapper[4858]: I0218 00:34:57.916896 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:57Z","lastTransitionTime":"2026-02-18T00:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.019459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.019624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.019648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.019672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.019688 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.122956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.123021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.123039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.123064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.123086 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.225430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.225472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.225484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.225522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.225538 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.328377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.328439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.328460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.328490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.328559 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.390929 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:19:05.312055252 +0000 UTC Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.419648 4858 scope.go:117] "RemoveContainer" containerID="278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.430665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.430700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.430711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.430727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.430739 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.446013 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.467614 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.485795 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.502320 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.524396 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.532892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.532970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.532990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.533015 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.533033 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.538364 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.552968 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.568934 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.590833 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.605931 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.620829 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.640253 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.642093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.642140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.642154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.642171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.642292 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.669874 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e
64c460cc33c339bb997c1f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.689002 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.702251 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.715944 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.744289 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.744348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.744365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.744389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.744406 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.815089 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/1.log" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.818410 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.818588 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.840048 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.847608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.847650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.847662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.847680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.847694 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.860354 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.875218 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.899150 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.920686 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.935904 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.949739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.949779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.949790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.949804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.949813 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:58Z","lastTransitionTime":"2026-02-18T00:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.950728 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.967403 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.982727 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:58 crc kubenswrapper[4858]: I0218 00:34:58.997511 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.028935 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.048351 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.052124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.052222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.052248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.052280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.052304 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.079570 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.098873 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.116570 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 
00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.132598 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.154263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.154315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.154332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.154353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.154368 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.242849 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.242968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.243001 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.243043 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243065 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:35:31.243033938 +0000 UTC m=+84.548870680 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.243119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243156 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243177 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243178 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243232 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243258 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243265 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243284 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243290 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:35:31.243263624 +0000 UTC m=+84.549100386 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243190 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243320 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:35:31.243306885 +0000 UTC m=+84.549143657 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243357 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:35:31.243331965 +0000 UTC m=+84.549168727 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.243386 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:35:31.243373096 +0000 UTC m=+84.549209868 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.261695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.261773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.261791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.261826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.261843 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.364462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.364579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.364604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.364636 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.364659 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.391722 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:04:21.114779163 +0000 UTC Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.419168 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.419246 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.419354 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.419350 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.419440 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.419586 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.419678 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.419863 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.467834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.467887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.467903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.467925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.467945 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.571205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.571570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.571760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.571904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.572031 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.675133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.675188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.675205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.675229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.675247 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.778097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.778137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.778148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.778170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.778183 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.825895 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/2.log" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.826911 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/1.log" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.841343 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36" exitCode=1 Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.841397 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.841582 4858 scope.go:117] "RemoveContainer" containerID="278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.843051 4858 scope.go:117] "RemoveContainer" containerID="1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36" Feb 18 00:34:59 crc kubenswrapper[4858]: E0218 00:34:59.843300 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.858712 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.869320 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.879943 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.882606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.882648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.882666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.882690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.882707 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.891575 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.902993 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.916704 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.980932 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:34:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.985856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.985939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:34:59 crc 
kubenswrapper[4858]: I0218 00:34:59.985958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.985984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:34:59 crc kubenswrapper[4858]: I0218 00:34:59.986002 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:34:59Z","lastTransitionTime":"2026-02-18T00:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.004944 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.020453 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.052904 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e3
2d727c3fbf412d66521bef36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] 
Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.072794 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.089528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.089567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.089587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.089611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.089628 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:00Z","lastTransitionTime":"2026-02-18T00:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.095778 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.116649 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.137565 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.159135 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.173221 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.193200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.193301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.193324 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.193354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.193376 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:00Z","lastTransitionTime":"2026-02-18T00:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.296541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.296600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.296623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.296657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.296676 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:00Z","lastTransitionTime":"2026-02-18T00:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.392429 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:09:16.774178919 +0000 UTC Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.399574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.399627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.399638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.399656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.399667 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:00Z","lastTransitionTime":"2026-02-18T00:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.503230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.503291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.503328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.503354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.503371 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:00Z","lastTransitionTime":"2026-02-18T00:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.606034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.606100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.606117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.606140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.606159 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:00Z","lastTransitionTime":"2026-02-18T00:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.709787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.709861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.709883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.709908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.709926 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:00Z","lastTransitionTime":"2026-02-18T00:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.813622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.813682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.813699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.813723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.813741 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:00Z","lastTransitionTime":"2026-02-18T00:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.847205 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/2.log" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.916225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.916281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.916298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.916321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:00 crc kubenswrapper[4858]: I0218 00:35:00.916340 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:00Z","lastTransitionTime":"2026-02-18T00:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.018654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.018708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.018728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.018753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.018770 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.122262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.122306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.122322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.122345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.122363 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.196029 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.208703 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.210292 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.225385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.225461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.225475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.225520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.225535 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.226859 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.238937 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.253757 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.270865 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.282692 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.304418 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.320359 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.336656 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.342260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.342287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.342301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.342320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.342334 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.351872 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.376828 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.392939 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:08:49.644092156 +0000 UTC Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.393948 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753f
c478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.409202 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.418783 4858 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.418885 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.418973 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:01 crc kubenswrapper[4858]: E0218 00:35:01.419147 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.419200 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:01 crc kubenswrapper[4858]: E0218 00:35:01.419393 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:01 crc kubenswrapper[4858]: E0218 00:35:01.419558 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:01 crc kubenswrapper[4858]: E0218 00:35:01.419712 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.440941 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278932ec03614938f670b2edaf7e4ddf5efccf9e64c460cc33c339bb997c1f87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"message\\\":\\\"0:34:44.854079 6322 services_controller.go:356] Processing sync for service openshift-etcd-operator/metrics for network=default\\\\nI0218 00:34:44.854032 6322 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 00:34:44.854094 6322 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00786e317 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.444994 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.445041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.445054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.445078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.445093 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.460379 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.477397 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.548886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.548998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.549013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.549039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.549063 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.653019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.653095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.653113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.653143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.653161 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.757366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.757455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.757486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.757562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.757601 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.861191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.861273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.861301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.861349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.861373 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.964374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.964423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.964436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.964451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:01 crc kubenswrapper[4858]: I0218 00:35:01.964463 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:01Z","lastTransitionTime":"2026-02-18T00:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.067449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.067804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.067953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.068188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.068670 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:02Z","lastTransitionTime":"2026-02-18T00:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.172015 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.172283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.172345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.172404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.172465 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:02Z","lastTransitionTime":"2026-02-18T00:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.275881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.275950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.275969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.275995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.276012 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:02Z","lastTransitionTime":"2026-02-18T00:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.378937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.378985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.379002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.379027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.379044 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:02Z","lastTransitionTime":"2026-02-18T00:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.393692 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:20:36.94146259 +0000 UTC Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.481936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.482003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.482021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.482044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.482062 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:02Z","lastTransitionTime":"2026-02-18T00:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.584226 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.585706 4858 scope.go:117] "RemoveContainer" containerID="1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36" Feb 18 00:35:02 crc kubenswrapper[4858]: E0218 00:35:02.585981 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.586346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.586397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.586411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.586431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.586445 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:02Z","lastTransitionTime":"2026-02-18T00:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.603770 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.619528 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.634125 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.649328 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.665989 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.684004 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.690178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.690269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.690328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.690354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.690412 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:02Z","lastTransitionTime":"2026-02-18T00:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.705161 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.723041 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.749829 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.768901 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.785245 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.794111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.794203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.794220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.794245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.794261 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:02Z","lastTransitionTime":"2026-02-18T00:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.801049 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.823085 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.841321 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.864271 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.883072 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.897614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.897664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.897682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.897705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.897723 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:02Z","lastTransitionTime":"2026-02-18T00:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:02 crc kubenswrapper[4858]: I0218 00:35:02.914072 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e3
2d727c3fbf412d66521bef36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.000485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.000585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.000604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.000633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.000651 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.103867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.103932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.103955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.103983 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.104004 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.207715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.207779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.207790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.207818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.207832 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.311532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.311622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.311645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.311672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.311689 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.394302 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:56:34.854090475 +0000 UTC Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.415001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.415063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.415080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.415106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.415126 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.418531 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.418594 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.418602 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.418702 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:03 crc kubenswrapper[4858]: E0218 00:35:03.418695 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:03 crc kubenswrapper[4858]: E0218 00:35:03.418877 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:03 crc kubenswrapper[4858]: E0218 00:35:03.419022 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:03 crc kubenswrapper[4858]: E0218 00:35:03.419221 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.518254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.518308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.518326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.518347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.518364 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.621991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.622058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.622080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.622105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.622122 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.693719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:03 crc kubenswrapper[4858]: E0218 00:35:03.693923 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:35:03 crc kubenswrapper[4858]: E0218 00:35:03.694025 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs podName:7064635a-c927-4499-98ce-76833fb5801c nodeName:}" failed. No retries permitted until 2026-02-18 00:35:19.693999413 +0000 UTC m=+72.999836175 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs") pod "network-metrics-daemon-jbdlz" (UID: "7064635a-c927-4499-98ce-76833fb5801c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.725846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.725892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.725911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.725935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.725951 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.829193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.829255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.829272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.829301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.829317 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.932472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.932565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.932582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.932608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:03 crc kubenswrapper[4858]: I0218 00:35:03.932626 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:03Z","lastTransitionTime":"2026-02-18T00:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.035154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.035208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.035226 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.035249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.035265 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.142813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.142880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.142903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.142929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.142947 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.246224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.246273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.246284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.246302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.246315 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.349049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.349114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.349130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.349154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.349172 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.394713 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:52:34.655193232 +0000 UTC Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.453010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.453099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.453125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.453206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.453231 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.559172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.559239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.559261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.559312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.559333 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.661429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.661477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.661531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.661560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.661583 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.764050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.764104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.764125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.764148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.764164 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.867045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.867391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.867417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.867437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.867450 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.971064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.971126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.971138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.971160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:04 crc kubenswrapper[4858]: I0218 00:35:04.971172 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:04Z","lastTransitionTime":"2026-02-18T00:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.074592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.074642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.074653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.074672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.074683 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:05Z","lastTransitionTime":"2026-02-18T00:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.177846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.177915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.177933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.177963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.177990 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:05Z","lastTransitionTime":"2026-02-18T00:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.281199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.281269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.281288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.281314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.281330 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:05Z","lastTransitionTime":"2026-02-18T00:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.384238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.384295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.384307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.384323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.384333 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:05Z","lastTransitionTime":"2026-02-18T00:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.395767 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:40:10.94282066 +0000 UTC Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.419090 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.419175 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.419203 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.419153 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:05 crc kubenswrapper[4858]: E0218 00:35:05.419356 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:05 crc kubenswrapper[4858]: E0218 00:35:05.419425 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:05 crc kubenswrapper[4858]: E0218 00:35:05.419508 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:05 crc kubenswrapper[4858]: E0218 00:35:05.419567 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.487851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.487886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.487896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.487912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.487923 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:05Z","lastTransitionTime":"2026-02-18T00:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.591303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.591349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.591361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.591379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.591391 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:05Z","lastTransitionTime":"2026-02-18T00:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.694625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.694687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.694705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.694730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.694747 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:05Z","lastTransitionTime":"2026-02-18T00:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.798185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.798266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.798291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.798323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.798349 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:05Z","lastTransitionTime":"2026-02-18T00:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.901238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.901299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.901316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.901349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:05 crc kubenswrapper[4858]: I0218 00:35:05.901368 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:05Z","lastTransitionTime":"2026-02-18T00:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.003993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.004068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.004086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.004114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.004131 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.107042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.107116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.107143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.107172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.107192 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.210218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.210304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.210328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.210357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.210380 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.313567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.313631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.313654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.313682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.313705 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.396767 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:07:05.00966896 +0000 UTC Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.416810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.416860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.416877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.416900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.416917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.520928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.520993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.521009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.521034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.521051 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.624099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.624185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.624203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.624228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.624247 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.727556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.727619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.727637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.727661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.727699 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.831016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.831068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.831088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.831113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.831131 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.934641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.934737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.934754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.934777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:06 crc kubenswrapper[4858]: I0218 00:35:06.934797 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:06Z","lastTransitionTime":"2026-02-18T00:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.038310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.038408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.038425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.038454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.038473 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.141375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.141440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.141453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.141477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.141516 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.245141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.245217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.245236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.245265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.245282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.348444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.348543 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.348567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.348593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.348611 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.397724 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:14:27.582377718 +0000 UTC Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.419569 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.421032 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.421091 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.421062 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.421235 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.421347 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.421100 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.421468 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.442086 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.452741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.452810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.452827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.452853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.452870 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.460021 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.485901 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.511423 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.533419 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.556480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.556542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.556552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.556568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.556580 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.567837 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.586990 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.605071 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.620697 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.636323 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.653032 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.659122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.659193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.659212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.659241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.659260 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.670781 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.687123 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.702971 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.718173 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.737959 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.756762 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.765247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.765820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.765940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.766057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.766162 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.792279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.792325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.792339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.792358 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.792371 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.811678 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.816073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.816210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.816300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.816403 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.816514 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.834960 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.839299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.839365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.839380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.839408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.839426 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.856886 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.861077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.861125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.861138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.861157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.861170 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.878913 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.883008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.883071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.883088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.883114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.883133 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.899665 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:07 crc kubenswrapper[4858]: E0218 00:35:07.899816 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.901626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.901665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.901679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.901700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:07 crc kubenswrapper[4858]: I0218 00:35:07.901715 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:07Z","lastTransitionTime":"2026-02-18T00:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.005162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.005450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.005533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.005616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.005681 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.108178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.108421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.108512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.108578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.108632 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.211696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.212012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.212152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.212369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.212625 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.316407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.316477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.316538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.316567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.316584 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.398802 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:49:45.445836905 +0000 UTC Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.419120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.419179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.419204 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.419237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.419275 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.522398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.522465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.522490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.522558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.522583 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.625434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.625482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.625549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.625589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.625612 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.728339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.728388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.728404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.728425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.728441 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.831626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.831757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.831782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.831810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.831831 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.934209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.934268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.934284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.934308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:08 crc kubenswrapper[4858]: I0218 00:35:08.934324 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:08Z","lastTransitionTime":"2026-02-18T00:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.037455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.037598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.037624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.037654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.037677 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.141245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.141581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.141814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.142016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.142235 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.244851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.245269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.245557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.245777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.245966 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.349796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.349890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.349911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.349936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.349953 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.399575 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:51:15.209753375 +0000 UTC Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.419042 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.419388 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.419425 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.419167 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:09 crc kubenswrapper[4858]: E0218 00:35:09.419895 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:09 crc kubenswrapper[4858]: E0218 00:35:09.420082 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:09 crc kubenswrapper[4858]: E0218 00:35:09.420348 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:09 crc kubenswrapper[4858]: E0218 00:35:09.420677 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.453182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.453594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.453739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.453891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.454021 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.557929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.558242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.558392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.558564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.558718 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.661759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.661856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.661873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.661898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.661915 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.765332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.765384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.765402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.765427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.765445 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.868986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.869049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.869065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.869091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.869108 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.972278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.972342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.972360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.972384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:09 crc kubenswrapper[4858]: I0218 00:35:09.972404 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:09Z","lastTransitionTime":"2026-02-18T00:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.074657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.075029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.075532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.075902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.076100 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:10Z","lastTransitionTime":"2026-02-18T00:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.178524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.178938 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.179372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.179599 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.179825 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:10Z","lastTransitionTime":"2026-02-18T00:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.283536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.283973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.284119 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.284252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.284382 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:10Z","lastTransitionTime":"2026-02-18T00:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.387922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.387972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.387993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.388021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.388044 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:10Z","lastTransitionTime":"2026-02-18T00:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.400552 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:16:24.766072088 +0000 UTC Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.491909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.491978 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.491995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.492016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.492032 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:10Z","lastTransitionTime":"2026-02-18T00:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.595065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.595125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.595143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.595189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.595207 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:10Z","lastTransitionTime":"2026-02-18T00:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.698558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.699001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.699352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.699732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.700095 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:10Z","lastTransitionTime":"2026-02-18T00:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.803097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.803155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.803171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.803194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.803210 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:10Z","lastTransitionTime":"2026-02-18T00:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.905646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.905960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.906108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.906243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:10 crc kubenswrapper[4858]: I0218 00:35:10.906367 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:10Z","lastTransitionTime":"2026-02-18T00:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.008927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.008978 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.008990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.009010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.009024 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.111617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.111896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.112068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.112209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.112348 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.215532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.215589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.215606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.215629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.215646 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.318644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.318703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.318722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.318746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.318763 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.401198 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:38:37.52757027 +0000 UTC Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.418617 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.418851 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.418737 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:11 crc kubenswrapper[4858]: E0218 00:35:11.419054 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:11 crc kubenswrapper[4858]: E0218 00:35:11.419051 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:11 crc kubenswrapper[4858]: E0218 00:35:11.419314 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.419398 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:11 crc kubenswrapper[4858]: E0218 00:35:11.419486 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.421079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.421113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.421128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.421148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.421162 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.524280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.524566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.524630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.524704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.524768 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.627067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.627101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.627109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.627121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.627130 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.730143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.730181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.730189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.730203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.730213 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.833548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.833613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.833632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.833656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.833674 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.935987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.936027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.936037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.936054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:11 crc kubenswrapper[4858]: I0218 00:35:11.936065 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:11Z","lastTransitionTime":"2026-02-18T00:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.038680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.038727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.038739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.038755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.038767 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.141552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.141621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.141643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.141671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.141693 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.244448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.244483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.244517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.244531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.244543 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.347151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.347205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.347220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.347241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.347261 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.401613 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:12:16.875742018 +0000 UTC Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.449592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.449658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.449683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.449707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.449724 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.552082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.552103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.552111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.552125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.552134 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.654934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.654974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.654985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.655000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.655011 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.757671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.757725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.757734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.757750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.757758 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.860375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.860414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.860424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.860440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.860450 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.962803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.962864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.962884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.962902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:12 crc kubenswrapper[4858]: I0218 00:35:12.962915 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:12Z","lastTransitionTime":"2026-02-18T00:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.065862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.065900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.065912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.065927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.065938 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.168103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.168142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.168153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.168169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.168181 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.269704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.269751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.269763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.269785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.269796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.371850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.371896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.371907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.371927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.371940 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.402678 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:07:26.356506436 +0000 UTC Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.419200 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.419335 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:13 crc kubenswrapper[4858]: E0218 00:35:13.419442 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.419553 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:13 crc kubenswrapper[4858]: E0218 00:35:13.419628 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.419570 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:13 crc kubenswrapper[4858]: E0218 00:35:13.419734 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:13 crc kubenswrapper[4858]: E0218 00:35:13.419797 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.474688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.474736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.474750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.474766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.474777 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.576647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.576689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.576701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.576717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.576729 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.679550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.679596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.679608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.679625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.679639 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.782259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.782309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.782325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.782346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.782361 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.884421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.884474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.884511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.884537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.884555 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.987332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.987387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.987399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.987415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:13 crc kubenswrapper[4858]: I0218 00:35:13.987426 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:13Z","lastTransitionTime":"2026-02-18T00:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.090128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.090162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.090170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.090183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.090192 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:14Z","lastTransitionTime":"2026-02-18T00:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.193015 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.193056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.193067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.193087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.193099 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:14Z","lastTransitionTime":"2026-02-18T00:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.295790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.295836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.295854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.295876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.295894 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:14Z","lastTransitionTime":"2026-02-18T00:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.398752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.398799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.398809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.398821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.398847 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:14Z","lastTransitionTime":"2026-02-18T00:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.403132 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:24:02.206101851 +0000 UTC Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.501435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.501468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.501477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.501510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.501519 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:14Z","lastTransitionTime":"2026-02-18T00:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.603854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.603898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.603914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.603934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.603950 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:14Z","lastTransitionTime":"2026-02-18T00:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.706658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.706694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.706706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.706740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.706754 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:14Z","lastTransitionTime":"2026-02-18T00:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.809196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.809253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.809262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.809287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.809317 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:14Z","lastTransitionTime":"2026-02-18T00:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.911850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.911928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.911953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.911986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:14 crc kubenswrapper[4858]: I0218 00:35:14.912008 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:14Z","lastTransitionTime":"2026-02-18T00:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.014820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.014883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.014905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.014931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.014951 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.117687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.117737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.117755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.117779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.117796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.219985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.220027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.220038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.220056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.220068 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.322778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.322820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.322830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.322847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.322871 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.404145 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:19:11.194989221 +0000 UTC Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.418528 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.418536 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.418554 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.418554 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:15 crc kubenswrapper[4858]: E0218 00:35:15.418772 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:15 crc kubenswrapper[4858]: E0218 00:35:15.418855 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:15 crc kubenswrapper[4858]: E0218 00:35:15.418910 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:15 crc kubenswrapper[4858]: E0218 00:35:15.418966 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.425134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.425172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.425184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.425201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.425212 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.528186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.528253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.528270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.528294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.528312 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.631635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.631791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.631876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.631957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.631995 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.734886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.734916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.734929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.734946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.734959 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.837239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.837278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.837286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.837301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.837310 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.940381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.940429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.940447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.940470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:15 crc kubenswrapper[4858]: I0218 00:35:15.940486 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:15Z","lastTransitionTime":"2026-02-18T00:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.043765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.043812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.043828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.043851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.043867 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.146372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.146411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.146422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.146441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.146453 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.248753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.248798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.248809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.248824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.248835 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.351104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.351178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.351202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.351231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.351253 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.404570 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:47:10.970672321 +0000 UTC Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.453859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.453914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.453924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.453939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.453948 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.556228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.556288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.556305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.556323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.556338 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.659459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.659506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.659515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.659527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.659534 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.762855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.762877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.762887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.762897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.762905 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.865611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.865652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.865664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.865679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.865690 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.967660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.967716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.967730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.967744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:16 crc kubenswrapper[4858]: I0218 00:35:16.967754 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:16Z","lastTransitionTime":"2026-02-18T00:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.070229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.070276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.070294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.070316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.070333 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.178784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.178841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.178858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.178887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.178905 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.280813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.280847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.280857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.280871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.280882 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.383135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.383167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.383177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.383188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.383197 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.405604 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:35:09.974062828 +0000 UTC Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.419018 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.419370 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.419392 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:17 crc kubenswrapper[4858]: E0218 00:35:17.419509 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.419559 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:17 crc kubenswrapper[4858]: E0218 00:35:17.419686 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:17 crc kubenswrapper[4858]: E0218 00:35:17.419751 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:17 crc kubenswrapper[4858]: E0218 00:35:17.419803 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.419941 4858 scope.go:117] "RemoveContainer" containerID="1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36" Feb 18 00:35:17 crc kubenswrapper[4858]: E0218 00:35:17.420182 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.440425 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95c
d6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.458878 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.474556 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.486267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.486311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.486320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.486334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.486344 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.487236 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.504285 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.522561 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.538470 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.551844 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.574071 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.587760 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.589411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.589595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.589628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.589665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.589693 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.607120 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.621988 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.637184 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 
00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.654788 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.670542 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.689892 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.691992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.692052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.692069 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.692098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.692115 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.705665 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.794005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.794040 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.794051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.794072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.794085 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.896118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.896159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.896170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.896186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.896197 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.998057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.998084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.998092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.998105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:17 crc kubenswrapper[4858]: I0218 00:35:17.998113 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:17Z","lastTransitionTime":"2026-02-18T00:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.100328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.100354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.100361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.100373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.100381 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.148110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.148155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.148172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.148194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.148211 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: E0218 00:35:18.167167 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.171773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.171799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.171808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.171818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.171827 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: E0218 00:35:18.183578 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.188179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.188221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.188238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.188259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.188276 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: E0218 00:35:18.206346 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.210781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.210826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.210843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.210867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.210883 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: E0218 00:35:18.228703 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.232797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.232866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.232886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.232910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.232930 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: E0218 00:35:18.251104 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:18 crc kubenswrapper[4858]: E0218 00:35:18.251220 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.252602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.252653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.252676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.252708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.252732 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.355410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.355517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.355535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.355556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.355571 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.406233 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:51:17.50885157 +0000 UTC Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.458822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.458854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.458865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.458879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.458889 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.561021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.561056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.561064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.561077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.561086 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.663995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.664243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.664261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.664285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.664308 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.766552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.766594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.766603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.766618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.766627 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.868485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.868548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.868557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.868572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.868583 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.970139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.970171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.970178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.970192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:18 crc kubenswrapper[4858]: I0218 00:35:18.970200 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:18Z","lastTransitionTime":"2026-02-18T00:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.072779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.072848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.072870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.072904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.072926 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.175004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.175050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.175063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.175083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.175097 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.277918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.277969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.277985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.278006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.278022 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.380427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.380460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.380470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.380483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.380510 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.407035 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:10:05.358812653 +0000 UTC Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.419416 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.419594 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:19 crc kubenswrapper[4858]: E0218 00:35:19.419632 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.419684 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:19 crc kubenswrapper[4858]: E0218 00:35:19.420216 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:19 crc kubenswrapper[4858]: E0218 00:35:19.420407 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.420446 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:19 crc kubenswrapper[4858]: E0218 00:35:19.420712 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.482690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.482741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.482758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.482780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.482797 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.585148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.585185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.585193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.585206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.585215 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.688125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.688162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.688180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.688202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.688220 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.790251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:19 crc kubenswrapper[4858]: E0218 00:35:19.790546 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:35:19 crc kubenswrapper[4858]: E0218 00:35:19.790681 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs podName:7064635a-c927-4499-98ce-76833fb5801c nodeName:}" failed. No retries permitted until 2026-02-18 00:35:51.79064378 +0000 UTC m=+105.096480562 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs") pod "network-metrics-daemon-jbdlz" (UID: "7064635a-c927-4499-98ce-76833fb5801c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.791161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.791219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.791229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.791246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.791256 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.893006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.893416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.893607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.893779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.893906 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.996734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.996776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.996788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.996804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:19 crc kubenswrapper[4858]: I0218 00:35:19.996815 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:19Z","lastTransitionTime":"2026-02-18T00:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.099327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.099364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.099373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.099384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.099395 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:20Z","lastTransitionTime":"2026-02-18T00:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.201518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.201557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.201567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.201582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.201595 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:20Z","lastTransitionTime":"2026-02-18T00:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.304101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.304137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.304145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.304161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.304187 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:20Z","lastTransitionTime":"2026-02-18T00:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.406934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.406975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.406984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.406998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.407009 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:20Z","lastTransitionTime":"2026-02-18T00:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.407304 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:48:22.797157829 +0000 UTC Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.508745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.508774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.508782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.508795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.508805 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:20Z","lastTransitionTime":"2026-02-18T00:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.611249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.611282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.611291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.611304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.611313 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:20Z","lastTransitionTime":"2026-02-18T00:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.713592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.713684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.713702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.713725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.713743 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:20Z","lastTransitionTime":"2026-02-18T00:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.816439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.816538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.816566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.816593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.816609 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:20Z","lastTransitionTime":"2026-02-18T00:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.918837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.918905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.918926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.918950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.918968 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:20Z","lastTransitionTime":"2026-02-18T00:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.927295 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/0.log" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.927345 4858 generic.go:334] "Generic (PLEG): container finished" podID="631d8e25-82dd-4462-b98d-f076e7264b67" containerID="fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4" exitCode=1 Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.927387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sr8bs" event={"ID":"631d8e25-82dd-4462-b98d-f076e7264b67","Type":"ContainerDied","Data":"fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4"} Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.928040 4858 scope.go:117] "RemoveContainer" containerID="fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.944823 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"2026-02-18T00:34:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73\\\\n2026-02-18T00:34:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73 to /host/opt/cni/bin/\\\\n2026-02-18T00:34:35Z [verbose] multus-daemon started\\\\n2026-02-18T00:34:35Z [verbose] Readiness Indicator file 
check\\\\n2026-02-18T00:35:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:20Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.964292 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:20Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:20 crc kubenswrapper[4858]: I0218 00:35:20.990944 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:20Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.004543 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.017590 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.022020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.022048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.022081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.022096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.022109 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.027750 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.039114 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 
00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.050843 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.065352 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.077960 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.092148 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.103466 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.116360 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.124858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.124912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.124923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.124945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.124967 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.127730 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.138948 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.147590 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.167769 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.236732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.236794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc 
kubenswrapper[4858]: I0218 00:35:21.236813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.236849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.236865 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.339947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.340017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.340035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.340060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.340077 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.407957 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:26:21.101544779 +0000 UTC Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.419393 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.419435 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.419567 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:21 crc kubenswrapper[4858]: E0218 00:35:21.419742 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.419833 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:21 crc kubenswrapper[4858]: E0218 00:35:21.419920 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:21 crc kubenswrapper[4858]: E0218 00:35:21.419997 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:21 crc kubenswrapper[4858]: E0218 00:35:21.420317 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.442961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.443003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.443014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.443030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.443043 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.545174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.545212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.545221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.545235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.545285 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.647411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.647469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.647536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.647571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.647597 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.750386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.750478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.750546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.750578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.750596 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.854515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.854582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.854595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.854619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.854635 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.934992 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/0.log" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.935061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sr8bs" event={"ID":"631d8e25-82dd-4462-b98d-f076e7264b67","Type":"ContainerStarted","Data":"1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.951780 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.957198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.957291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.957312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.957375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.957398 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:21Z","lastTransitionTime":"2026-02-18T00:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.971326 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:21 crc kubenswrapper[4858]: I0218 00:35:21.987223 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.000123 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:21Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.023254 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.040017 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.057421 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"2026-02-18T00:34:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73\\\\n2026-02-18T00:34:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73 to /host/opt/cni/bin/\\\\n2026-02-18T00:34:35Z [verbose] multus-daemon started\\\\n2026-02-18T00:34:35Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:35:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.060060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.060202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.060293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.060673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.060766 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.072972 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.106129 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.127239 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.148149 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.165312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.165554 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.165649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.165972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.166197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.166345 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.183892 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.206563 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.225675 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.246271 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.260940 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:22Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.269937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.269992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.270000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.270014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.270022 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.373391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.373436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.373449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.373467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.373479 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.408140 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:12:16.844426846 +0000 UTC Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.476321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.476388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.476401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.476422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.476467 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.578908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.579684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.579890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.580125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.580329 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.683839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.683898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.683916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.683941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.683958 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.786470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.786600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.786618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.786642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.786659 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.893639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.894382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.894425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.894453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.894473 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.997892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.997937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.997953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.997975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:22 crc kubenswrapper[4858]: I0218 00:35:22.997992 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:22Z","lastTransitionTime":"2026-02-18T00:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.100357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.100425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.100448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.100475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.100536 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:23Z","lastTransitionTime":"2026-02-18T00:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.202868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.202923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.202940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.202964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.202981 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:23Z","lastTransitionTime":"2026-02-18T00:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.305919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.305998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.306022 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.306052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.306069 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:23Z","lastTransitionTime":"2026-02-18T00:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.408426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.408452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.408425 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:53:47.567222969 +0000 UTC Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.408460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.408505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.408520 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:23Z","lastTransitionTime":"2026-02-18T00:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.419088 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:23 crc kubenswrapper[4858]: E0218 00:35:23.419188 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.419246 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.419273 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:23 crc kubenswrapper[4858]: E0218 00:35:23.419472 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.419582 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:23 crc kubenswrapper[4858]: E0218 00:35:23.419670 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:23 crc kubenswrapper[4858]: E0218 00:35:23.419781 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.511313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.511349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.511359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.511374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.511385 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:23Z","lastTransitionTime":"2026-02-18T00:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.613648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.613670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.613680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.613693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.613704 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:23Z","lastTransitionTime":"2026-02-18T00:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.716758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.716806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.716821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.716842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.716858 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:23Z","lastTransitionTime":"2026-02-18T00:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.819270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.819353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.819373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.819396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.819416 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:23Z","lastTransitionTime":"2026-02-18T00:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.922013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.922080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.922098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.922122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:23 crc kubenswrapper[4858]: I0218 00:35:23.922139 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:23Z","lastTransitionTime":"2026-02-18T00:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.024901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.025323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.025544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.025702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.025888 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.129332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.129389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.129406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.129429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.129468 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.232042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.232411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.232663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.232846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.233017 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.336146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.336377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.336469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.336568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.336636 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.409190 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:09:50.91757172 +0000 UTC Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.438900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.439108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.439180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.439245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.439303 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.542613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.542701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.542720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.542745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.542763 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.644921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.645647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.645671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.645690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.645699 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.748597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.748670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.748692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.748723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.748746 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.851443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.851473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.851482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.851519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.851529 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.953597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.953632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.953641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.953653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:24 crc kubenswrapper[4858]: I0218 00:35:24.953662 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:24Z","lastTransitionTime":"2026-02-18T00:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.056397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.056455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.056472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.056522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.056540 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.160401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.160475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.160536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.160581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.160602 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.262900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.262957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.262973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.262996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.263013 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.365525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.365590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.365607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.365631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.365649 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.409989 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:49:05.55041335 +0000 UTC Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.419435 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.419536 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:25 crc kubenswrapper[4858]: E0218 00:35:25.419701 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.419952 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:25 crc kubenswrapper[4858]: E0218 00:35:25.420134 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.420173 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:25 crc kubenswrapper[4858]: E0218 00:35:25.420474 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:25 crc kubenswrapper[4858]: E0218 00:35:25.420607 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.469482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.469656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.469687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.469772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.469796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.572068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.572296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.572383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.572465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.572580 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.675778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.676623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.676801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.676956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.677105 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.779768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.779796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.779805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.779818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.779827 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.882790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.882846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.882867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.882897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.882917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.984910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.985306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.985490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.985973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:25 crc kubenswrapper[4858]: I0218 00:35:25.986276 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:25Z","lastTransitionTime":"2026-02-18T00:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.089392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.089839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.090008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.090145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.090282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:26Z","lastTransitionTime":"2026-02-18T00:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.193689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.193737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.193787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.194201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.194254 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:26Z","lastTransitionTime":"2026-02-18T00:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.297133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.297178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.297194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.297216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.297232 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:26Z","lastTransitionTime":"2026-02-18T00:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.399747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.399831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.399853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.399883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.399906 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:26Z","lastTransitionTime":"2026-02-18T00:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.410658 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:27:27.806610942 +0000 UTC Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.432857 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.503571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.503642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.503666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.503693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.503715 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:26Z","lastTransitionTime":"2026-02-18T00:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.606928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.607001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.607025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.607055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.607073 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:26Z","lastTransitionTime":"2026-02-18T00:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.710527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.710571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.710588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.710609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.710625 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:26Z","lastTransitionTime":"2026-02-18T00:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.813429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.813547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.813575 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.813646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.813675 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:26Z","lastTransitionTime":"2026-02-18T00:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.916723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.916783 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.916800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.916821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:26 crc kubenswrapper[4858]: I0218 00:35:26.916837 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:26Z","lastTransitionTime":"2026-02-18T00:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.019186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.019238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.019255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.019279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.019298 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.122251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.122296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.122315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.122337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.122353 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.225847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.225922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.225939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.225962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.226055 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.328549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.328616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.328634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.328660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.328678 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.411003 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:50:22.636532512 +0000 UTC Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.419327 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.419427 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:27 crc kubenswrapper[4858]: E0218 00:35:27.419745 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.419788 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.419847 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:27 crc kubenswrapper[4858]: E0218 00:35:27.420102 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:27 crc kubenswrapper[4858]: E0218 00:35:27.420223 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:27 crc kubenswrapper[4858]: E0218 00:35:27.420459 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.431450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.431546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.431573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.431603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.431625 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.440379 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.461068 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.479608 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.496398 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.527784 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.536774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.536892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc 
kubenswrapper[4858]: I0218 00:35:27.536914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.536951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.536970 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.559250 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"2026-02-18T00:34:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73\\\\n2026-02-18T00:34:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73 to /host/opt/cni/bin/\\\\n2026-02-18T00:34:35Z [verbose] multus-daemon started\\\\n2026-02-18T00:34:35Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:35:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.578697 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.603208 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.619692 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.636260 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.640631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.640780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.640811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.640845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.640867 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.649936 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.663608 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 
00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.681080 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.702688 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.721432 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.742936 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.744910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.745115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.745188 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.745740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.745768 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.763325 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.779690 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6889a266-e6ba-4995-8dea-6768bf9d6ad0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8023934d83dff5d479338ea0a0f8a97f50b10c8b8ac1fafe3a6f852e442981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.849590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.849659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.849685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.849719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.849744 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.952284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.952343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.952362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.952387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:27 crc kubenswrapper[4858]: I0218 00:35:27.952407 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:27Z","lastTransitionTime":"2026-02-18T00:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.056136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.056202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.056222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.056250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.056272 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.159192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.159269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.159293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.159323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.159475 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.263033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.263099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.263118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.263143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.263160 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.354605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.354681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.354703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.354732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.354754 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: E0218 00:35:28.376856 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.382206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.382462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.382688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.382839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.382983 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: E0218 00:35:28.404952 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.413775 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:31:06.332146129 +0000 UTC Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.414217 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.414297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.414327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.414362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.414395 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: E0218 00:35:28.437706 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.442902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.442971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.442993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.443019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.443041 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: E0218 00:35:28.463691 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.468522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.468655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.468677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.468706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.468723 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: E0218 00:35:28.488670 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:28Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:28 crc kubenswrapper[4858]: E0218 00:35:28.489007 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.491763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.491837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.491861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.491894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.491914 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.595704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.595784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.595806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.595831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.595852 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.698897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.698993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.699013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.699070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.699088 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.802055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.802099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.802110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.802129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.802141 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.904955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.905031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.905055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.905085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:28 crc kubenswrapper[4858]: I0218 00:35:28.905106 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:28Z","lastTransitionTime":"2026-02-18T00:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.007306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.007377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.007400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.007427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.007445 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.110377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.111626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.111954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.113081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.113680 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.217490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.217560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.217576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.217601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.217617 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.321107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.321437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.321736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.321990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.322988 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.414561 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:08:02.202162192 +0000 UTC Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.419074 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:29 crc kubenswrapper[4858]: E0218 00:35:29.419232 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.419334 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:29 crc kubenswrapper[4858]: E0218 00:35:29.419416 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.419695 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.419885 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:29 crc kubenswrapper[4858]: E0218 00:35:29.420180 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:29 crc kubenswrapper[4858]: E0218 00:35:29.420324 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.426187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.426250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.426274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.426303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.426326 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.529271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.529322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.529342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.529365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.529382 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.632817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.632875 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.632893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.632923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.632940 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.736006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.736073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.736095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.736115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.736129 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.839835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.839896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.839911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.839934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.839952 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.942841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.942900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.942919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.942941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:29 crc kubenswrapper[4858]: I0218 00:35:29.942958 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:29Z","lastTransitionTime":"2026-02-18T00:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.046455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.046550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.046570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.046596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.046613 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.149710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.149758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.149772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.149793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.149810 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.252439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.252479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.252506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.252524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.252534 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.355863 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.355923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.355938 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.355958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.355969 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.415107 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:24:37.373602859 +0000 UTC Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.459561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.459640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.459664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.459700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.459724 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.563416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.563479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.563547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.563578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.563600 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.669343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.669410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.669435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.669466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.669489 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.772624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.772704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.772726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.772754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.772775 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.876455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.876559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.876582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.876606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.876626 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.979560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.979633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.979653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.979742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:30 crc kubenswrapper[4858]: I0218 00:35:30.979764 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:30Z","lastTransitionTime":"2026-02-18T00:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.083292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.083349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.083367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.083394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.083413 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:31Z","lastTransitionTime":"2026-02-18T00:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.186917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.187113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.187148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.187177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.187195 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:31Z","lastTransitionTime":"2026-02-18T00:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.290559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.290643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.290678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.290708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.290729 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:31Z","lastTransitionTime":"2026-02-18T00:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.319831 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.320013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320069 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.320037692 +0000 UTC m=+148.625874454 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.320125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320195 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.320215 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.320279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320323 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320336 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.320321939 +0000 UTC m=+148.626158711 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320421 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.320395891 +0000 UTC m=+148.626232653 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320471 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320553 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320574 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320626 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.320610396 +0000 UTC m=+148.626447158 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320688 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320739 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320768 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.320843 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.320818641 +0000 UTC m=+148.626655463 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.393667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.393738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.393758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.393782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.393803 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:31Z","lastTransitionTime":"2026-02-18T00:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.415258 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:12:04.642963166 +0000 UTC Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.418691 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.418789 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.418964 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.419009 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.419562 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.419779 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.419880 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:31 crc kubenswrapper[4858]: E0218 00:35:31.420100 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.420113 4858 scope.go:117] "RemoveContainer" containerID="1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.496883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.496937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.496953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.496976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.496992 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:31Z","lastTransitionTime":"2026-02-18T00:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.601057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.601821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.601980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.602196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.602355 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:31Z","lastTransitionTime":"2026-02-18T00:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.704760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.704799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.704810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.704826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.704837 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:31Z","lastTransitionTime":"2026-02-18T00:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.808048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.808169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.808193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.808225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.808248 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:31Z","lastTransitionTime":"2026-02-18T00:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.910937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.911009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.911032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.911065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.911087 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:31Z","lastTransitionTime":"2026-02-18T00:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.970218 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/2.log" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.977243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78"} Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.978182 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:35:31 crc kubenswrapper[4858]: I0218 00:35:31.995071 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:31Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.008527 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.013333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.013378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.013390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.013407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.013419 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.021912 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.034283 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.050336 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.063741 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.078932 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"2026-02-18T00:34:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73\\\\n2026-02-18T00:34:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73 to /host/opt/cni/bin/\\\\n2026-02-18T00:34:35Z [verbose] multus-daemon started\\\\n2026-02-18T00:34:35Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:35:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.092624 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.111922 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.115039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.115070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.115080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.115092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.115103 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.126403 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.140802 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.155723 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.170005 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.180617 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6889a266-e6ba-4995-8dea-6768bf9d6ad0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8023934d83dff5d479338ea0a0f8a97f50b10c8b8ac1fafe3a6f852e442981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.200629 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.216670 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.217640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.217699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.217713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.217732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.217764 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.233273 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.244055 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:32Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.320346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.320380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.320410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.320427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.320440 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.415617 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:28:02.419612391 +0000 UTC Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.422792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.422819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.422827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.422839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.422848 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.524269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.524307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.524318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.524332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.524344 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.627298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.627366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.627389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.627417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.627439 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.730588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.730642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.730658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.730682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.730697 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.833419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.833488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.833537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.833563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.833581 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.936344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.936403 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.936423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.936448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.936467 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:32Z","lastTransitionTime":"2026-02-18T00:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.985007 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/3.log" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.986415 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/2.log" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.991716 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78" exitCode=1 Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.991787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78"} Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.992091 4858 scope.go:117] "RemoveContainer" containerID="1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36" Feb 18 00:35:32 crc kubenswrapper[4858]: I0218 00:35:32.992890 4858 scope.go:117] "RemoveContainer" containerID="654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78" Feb 18 00:35:32 crc kubenswrapper[4858]: E0218 00:35:32.993172 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.017853 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.039030 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"2026-02-18T00:34:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73\\\\n2026-02-18T00:34:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73 to /host/opt/cni/bin/\\\\n2026-02-18T00:34:35Z [verbose] multus-daemon started\\\\n2026-02-18T00:34:35Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:35:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.047322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.047408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.047570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.047613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.047638 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.063219 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.099097 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f85b344144f825b94f00dca0305e526fcf256e32d727c3fbf412d66521bef36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:34:59Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.393831 6520 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:34:59.394093 6520 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:34:59.394197 6520 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394272 6520 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:34:59.394831 6520 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:34:59.394905 6520 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 00:34:59.394918 6520 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 00:34:59.394999 6520 factory.go:656] Stopping watch factory\\\\nI0218 00:34:59.395029 6520 ovnkube.go:599] Stopped ovnkube\\\\nI0218 00:34:59.395078 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:34:59.395089 6520 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:32Z\\\",\\\"message\\\":\\\"ver at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 00:35:32.351561 6959 services_controller.go:453] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics template LB for network=default: []services.LB{}\\\\nI0218 00:35:32.351602 6959 services_controller.go:454] Service openshift-operator-lifecycle-manager/catalog-operator-metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0218 00:35:32.351585 6959 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 00:35:32.351675 6959 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.117124 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8
dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.134348 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.151178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.151228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.151246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.151273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.151291 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.154272 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.169738 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 
00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.185491 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6889a266-e6ba-4995-8dea-6768bf9d6ad0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8023934d83dff5d479338ea0a0f8a97f50b10c8b8ac1fafe3a6f852e442981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.202337 4858 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.220119 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.241054 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.253773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.253834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.253852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.253877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.253898 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.258960 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.280862 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.300162 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.318573 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.333731 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.357344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.357448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.357467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.357519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.357538 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.359397 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:33Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.416763 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:42:24.104475307 +0000 UTC Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.419056 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:33 crc kubenswrapper[4858]: E0218 00:35:33.419319 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.419382 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.419397 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.419653 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:33 crc kubenswrapper[4858]: E0218 00:35:33.419847 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:33 crc kubenswrapper[4858]: E0218 00:35:33.420085 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:33 crc kubenswrapper[4858]: E0218 00:35:33.420173 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.460124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.460168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.460188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.460212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.460230 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.563143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.563203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.563222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.563245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.563262 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.666369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.666418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.666434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.666458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.666475 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.770658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.770728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.770748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.770774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.770797 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.873726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.873775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.873793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.873816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.873832 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.976892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.976949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.976967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.976990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.977006 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:33Z","lastTransitionTime":"2026-02-18T00:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:33 crc kubenswrapper[4858]: I0218 00:35:33.998083 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/3.log" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.003398 4858 scope.go:117] "RemoveContainer" containerID="654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78" Feb 18 00:35:34 crc kubenswrapper[4858]: E0218 00:35:34.003726 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.024717 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.045381 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.063828 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.080320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.080415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.080440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.080472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.080528 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:34Z","lastTransitionTime":"2026-02-18T00:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.082483 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6889a266-e6ba-4995-8dea-6768bf9d6ad0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8023934d83dff5d479338ea0a0f8a97f50b10c8b8ac1fafe3a6f852e442981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.106035 4858 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.126989 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.142381 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.153596 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.171713 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.183402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.183470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:34 crc 
kubenswrapper[4858]: I0218 00:35:34.183484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.183518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.183531 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:34Z","lastTransitionTime":"2026-02-18T00:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.192706 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.209470 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.238879 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:32Z\\\",\\\"message\\\":\\\"ver at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 00:35:32.351561 6959 services_controller.go:453] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics template LB for network=default: []services.LB{}\\\\nI0218 00:35:32.351602 6959 services_controller.go:454] Service openshift-operator-lifecycle-manager/catalog-operator-metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0218 00:35:32.351585 6959 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 00:35:32.351675 6959 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:35:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.261079 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.276516 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"2026-02-18T00:34:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73\\\\n2026-02-18T00:34:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73 to /host/opt/cni/bin/\\\\n2026-02-18T00:34:35Z [verbose] multus-daemon started\\\\n2026-02-18T00:34:35Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:35:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.285582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.285625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.285638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.285658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.285671 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:34Z","lastTransitionTime":"2026-02-18T00:35:34Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.288804 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.305936 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 
00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.321973 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.334620 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.388164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.388276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.388299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.388329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.388353 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:34Z","lastTransitionTime":"2026-02-18T00:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.417676 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:19:42.01496617 +0000 UTC Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.491173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.491266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.491290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.491321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.491343 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:34Z","lastTransitionTime":"2026-02-18T00:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.594392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.594557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.594768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.594830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.594854 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:34Z","lastTransitionTime":"2026-02-18T00:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.697925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.698018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.698042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.698074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.698133 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:34Z","lastTransitionTime":"2026-02-18T00:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.800807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.800864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.800881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.800905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.800921 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:34Z","lastTransitionTime":"2026-02-18T00:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.904231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.904302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.904327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.904355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:34 crc kubenswrapper[4858]: I0218 00:35:34.904380 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:34Z","lastTransitionTime":"2026-02-18T00:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.006931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.006990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.007009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.007032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.007048 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.110665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.110721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.110738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.110765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.110783 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.213805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.213847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.213857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.213874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.213885 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.317020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.317114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.317138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.317168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.317195 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.418470 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:58:16.516969034 +0000 UTC Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.418665 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:35 crc kubenswrapper[4858]: E0218 00:35:35.418822 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.418872 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.418952 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.418967 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:35 crc kubenswrapper[4858]: E0218 00:35:35.419106 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:35 crc kubenswrapper[4858]: E0218 00:35:35.419201 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:35 crc kubenswrapper[4858]: E0218 00:35:35.419383 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.420820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.420891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.420908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.420928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.420971 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.524407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.524480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.524545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.524576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.524603 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.628228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.628308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.628327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.628353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.628372 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.731257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.731323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.731342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.731366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.731384 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.834946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.835007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.835026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.835052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.835071 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.938060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.938121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.938139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.938163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:35 crc kubenswrapper[4858]: I0218 00:35:35.938180 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:35Z","lastTransitionTime":"2026-02-18T00:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.041678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.041731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.041748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.041774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.041791 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.145158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.145219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.145238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.145262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.145278 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.247920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.247991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.248013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.248045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.248067 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.351238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.351310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.351333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.351363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.351385 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.419014 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:15:38.326913039 +0000 UTC Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.455351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.455417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.455435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.455462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.455479 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.561922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.562430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.562457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.562487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.562544 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.666227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.666291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.666315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.666346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.666374 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.769709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.769788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.769813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.769837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.769854 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.873296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.873341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.873359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.873381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.873398 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.976861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.976917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.976933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.976957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:36 crc kubenswrapper[4858]: I0218 00:35:36.976975 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:36Z","lastTransitionTime":"2026-02-18T00:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.079794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.079865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.079883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.079914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.079931 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:37Z","lastTransitionTime":"2026-02-18T00:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.182215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.182291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.182309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.182332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.182350 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:37Z","lastTransitionTime":"2026-02-18T00:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.285657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.285728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.285750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.285778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.285800 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:37Z","lastTransitionTime":"2026-02-18T00:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.389191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.389258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.389283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.389314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.389342 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:37Z","lastTransitionTime":"2026-02-18T00:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.418822 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.418944 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:37 crc kubenswrapper[4858]: E0218 00:35:37.419343 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.419427 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.419449 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:37 crc kubenswrapper[4858]: E0218 00:35:37.419546 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:37 crc kubenswrapper[4858]: E0218 00:35:37.419760 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:37 crc kubenswrapper[4858]: E0218 00:35:37.419940 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.419457 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:16:17.813771681 +0000 UTC Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.436908 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6889a266-e6ba-4995-8dea-6768bf9d6ad0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8023934d83dff5d479338ea0a0f8a97f50b10c8b8ac1fafe3a6f852e442981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.458432 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.477705 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.494344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.494403 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.494422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.494449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.494468 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:37Z","lastTransitionTime":"2026-02-18T00:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.498242 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.515005 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.539799 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.564822 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.584441 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.597691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.597742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.597755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.597782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.597798 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:37Z","lastTransitionTime":"2026-02-18T00:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.603922 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.620840 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.641108 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.663221 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"2026-02-18T00:34:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73\\\\n2026-02-18T00:34:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73 to /host/opt/cni/bin/\\\\n2026-02-18T00:34:35Z [verbose] 
multus-daemon started\\\\n2026-02-18T00:34:35Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:35:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.681266 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.701680 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.701975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.701994 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.702061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.702082 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:37Z","lastTransitionTime":"2026-02-18T00:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.713721 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://654a3b3d31b37d872da214d23ebf83bf4ec4272b
8ef12cd793af82bea158ce78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:32Z\\\",\\\"message\\\":\\\"ver at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 00:35:32.351561 6959 services_controller.go:453] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics template LB for network=default: []services.LB{}\\\\nI0218 00:35:32.351602 6959 services_controller.go:454] Service openshift-operator-lifecycle-manager/catalog-operator-metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0218 00:35:32.351585 6959 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 00:35:32.351675 6959 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:35:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.732407 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.752750 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.769841 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.789670 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 
00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.805721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.805794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.805815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.805839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.805859 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:37Z","lastTransitionTime":"2026-02-18T00:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.910644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.910721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.910744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.910777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:37 crc kubenswrapper[4858]: I0218 00:35:37.910801 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:37Z","lastTransitionTime":"2026-02-18T00:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.013058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.013122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.013136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.013161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.013179 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.116592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.116673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.116697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.116726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.116747 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.220548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.220615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.220637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.220670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.220693 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.323689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.323762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.323787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.323814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.323828 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.420052 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:31:03.904722756 +0000 UTC Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.426655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.426713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.426731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.426791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.426809 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.530198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.530251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.530272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.530299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.530346 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.632667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.632734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.632751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.632774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.632794 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.736109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.736169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.736188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.736215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.736235 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.839002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.839053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.839070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.839098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.839115 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.878679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.878736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.878753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.878778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.878795 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: E0218 00:35:38.901741 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.907425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.907471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.907488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.907538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.907555 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: E0218 00:35:38.925824 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.931467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.931569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.931593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.931617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.931634 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: E0218 00:35:38.954775 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.959217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.959274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.959291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.959316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.959334 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:38 crc kubenswrapper[4858]: E0218 00:35:38.978321 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:38Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.984691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.984756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.984778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.984803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:38 crc kubenswrapper[4858]: I0218 00:35:38.984820 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:38Z","lastTransitionTime":"2026-02-18T00:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: E0218 00:35:39.005273 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:39Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:39 crc kubenswrapper[4858]: E0218 00:35:39.005563 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.007678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.007731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.007749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.007773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.007790 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.109854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.109894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.109907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.109926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.109939 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.211918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.211949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.211961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.211974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.211984 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.314787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.314838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.314856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.314879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.314897 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.417895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.417920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.417933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.417947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.417958 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.418613 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.418689 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:39 crc kubenswrapper[4858]: E0218 00:35:39.418733 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.418763 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.418794 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:39 crc kubenswrapper[4858]: E0218 00:35:39.418875 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:39 crc kubenswrapper[4858]: E0218 00:35:39.418966 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:39 crc kubenswrapper[4858]: E0218 00:35:39.419071 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.420282 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:55:28.593274834 +0000 UTC Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.521224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.521272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.521289 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.521311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.521328 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.624221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.624279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.624298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.624321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.624338 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.726916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.726996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.727009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.727026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.727037 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.829643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.829730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.829753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.829784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.829808 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.933200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.933260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.933281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.933308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:39 crc kubenswrapper[4858]: I0218 00:35:39.933326 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:39Z","lastTransitionTime":"2026-02-18T00:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.035884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.035945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.035970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.036002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.036024 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.139259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.139320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.139340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.139363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.139380 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.242162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.242252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.242278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.242316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.242342 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.345962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.346060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.346089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.346123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.346142 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.420949 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:41:58.934916975 +0000 UTC Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.448953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.449020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.449034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.449051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.449062 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.552223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.552284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.552307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.552338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.552359 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.655621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.655700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.655725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.655756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.655780 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.759451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.759547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.759566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.759592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.759611 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.862918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.862992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.863013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.863045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.863068 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.965552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.965613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.965631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.965655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:40 crc kubenswrapper[4858]: I0218 00:35:40.965707 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:40Z","lastTransitionTime":"2026-02-18T00:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.068765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.068842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.068860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.068886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.068904 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:41Z","lastTransitionTime":"2026-02-18T00:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.172054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.172098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.172109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.172124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.172133 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:41Z","lastTransitionTime":"2026-02-18T00:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.275805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.275890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.275915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.275946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.275968 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:41Z","lastTransitionTime":"2026-02-18T00:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.378798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.378896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.378920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.378951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.378974 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:41Z","lastTransitionTime":"2026-02-18T00:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.419170 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.419193 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.419169 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.419232 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:41 crc kubenswrapper[4858]: E0218 00:35:41.419409 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:41 crc kubenswrapper[4858]: E0218 00:35:41.419623 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:41 crc kubenswrapper[4858]: E0218 00:35:41.419744 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:41 crc kubenswrapper[4858]: E0218 00:35:41.419999 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.421614 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:29:19.503640217 +0000 UTC Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.482194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.482288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.482325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.482359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.482382 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:41Z","lastTransitionTime":"2026-02-18T00:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.585837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.585897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.585920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.585948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.585969 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:41Z","lastTransitionTime":"2026-02-18T00:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.689098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.689165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.689183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.689211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.689230 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:41Z","lastTransitionTime":"2026-02-18T00:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.792973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.793055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.793076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.793148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.793172 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:41Z","lastTransitionTime":"2026-02-18T00:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.896156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.896221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.896237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.896262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:41 crc kubenswrapper[4858]: I0218 00:35:41.896286 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:41Z","lastTransitionTime":"2026-02-18T00:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.000399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.000470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.000489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.000553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.000571 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.103344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.103411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.103430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.103458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.103478 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.206695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.206770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.206791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.207004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.207026 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.310723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.310813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.310839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.310868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.310887 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.413928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.414028 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.414052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.414084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.414101 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.422423 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:30:14.471757519 +0000 UTC Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.517648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.517709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.517725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.517747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.517765 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.621179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.621245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.621263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.621288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.621307 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.723373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.723426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.723443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.723468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.723488 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.826318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.826394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.826412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.826434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.826451 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.930124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.930205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.930225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.930253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:42 crc kubenswrapper[4858]: I0218 00:35:42.930273 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:42Z","lastTransitionTime":"2026-02-18T00:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.033323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.033390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.033415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.033447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.033466 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.137232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.137293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.137316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.137344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.137368 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.240479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.240566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.240582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.240606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.240624 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.343892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.343933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.343944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.343961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.343972 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.418833 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.418941 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.418837 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:43 crc kubenswrapper[4858]: E0218 00:35:43.419009 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.419158 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:43 crc kubenswrapper[4858]: E0218 00:35:43.419299 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:43 crc kubenswrapper[4858]: E0218 00:35:43.419429 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:43 crc kubenswrapper[4858]: E0218 00:35:43.419583 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.422879 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:17:04.012194184 +0000 UTC Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.446383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.446443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.446460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.446485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.446532 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.550646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.550738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.550758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.550847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.550868 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.653747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.653781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.653789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.653801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.653810 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.756134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.756199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.756219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.756244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.756262 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.858771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.858835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.858859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.858885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.858904 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.963047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.963105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.963122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.963145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:43 crc kubenswrapper[4858]: I0218 00:35:43.963163 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:43Z","lastTransitionTime":"2026-02-18T00:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.065733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.065807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.065830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.065864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.065883 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.169591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.169654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.169673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.169697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.169717 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.272788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.272869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.272893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.272923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.272946 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.375756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.375827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.375849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.375879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.375901 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.423889 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:20:50.988235933 +0000 UTC Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.479324 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.479385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.479402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.479426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.479443 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.582085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.582161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.582184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.582213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.582233 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.685864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.685921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.685939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.685966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.685985 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.788357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.788425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.788442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.788466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.788488 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.891291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.891344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.891361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.891383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.891400 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.994031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.994081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.994098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.994120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:44 crc kubenswrapper[4858]: I0218 00:35:44.994137 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:44Z","lastTransitionTime":"2026-02-18T00:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.097232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.097280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.097297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.097318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.097335 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:45Z","lastTransitionTime":"2026-02-18T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.199850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.199911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.199927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.199950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.199967 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:45Z","lastTransitionTime":"2026-02-18T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.303377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.303445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.303466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.303491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.303579 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:45Z","lastTransitionTime":"2026-02-18T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.407051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.407116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.407135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.407162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.407179 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:45Z","lastTransitionTime":"2026-02-18T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.419160 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.419203 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.419260 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:45 crc kubenswrapper[4858]: E0218 00:35:45.419362 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.419404 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:45 crc kubenswrapper[4858]: E0218 00:35:45.419614 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:45 crc kubenswrapper[4858]: E0218 00:35:45.419789 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:45 crc kubenswrapper[4858]: E0218 00:35:45.419927 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.424017 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:43:49.05269175 +0000 UTC Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.509713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.509774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.509803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.509826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.509845 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:45Z","lastTransitionTime":"2026-02-18T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.612706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.612790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.612817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.612846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.612863 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:45Z","lastTransitionTime":"2026-02-18T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.715601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.715678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.715702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.715733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.715759 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:45Z","lastTransitionTime":"2026-02-18T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.819388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.819437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.819449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.819469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.819480 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:45Z","lastTransitionTime":"2026-02-18T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.922595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.922658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.922677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.922702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:45 crc kubenswrapper[4858]: I0218 00:35:45.922720 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:45Z","lastTransitionTime":"2026-02-18T00:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.025895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.025963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.025981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.026008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.026026 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.128844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.128897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.128906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.128923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.128934 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.232453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.232538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.232557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.232581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.232597 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.335639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.335715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.335738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.335768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.335788 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.425088 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:25:42.931373065 +0000 UTC Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.438817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.438882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.438900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.438925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.438944 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.541900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.541964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.541988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.542017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.542037 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.644446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.644529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.644548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.644574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.644596 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.748187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.748244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.748260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.748285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.748304 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.850997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.851082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.851100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.851123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.851140 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.954567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.954645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.954662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.955138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:46 crc kubenswrapper[4858]: I0218 00:35:46.955187 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:46Z","lastTransitionTime":"2026-02-18T00:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.057638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.057682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.057694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.057711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.057723 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.161651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.161729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.161751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.161783 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.161808 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.264846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.264908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.264931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.264963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.264985 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.368173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.368235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.368251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.368276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.368292 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.419611 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.419725 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.420012 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:47 crc kubenswrapper[4858]: E0218 00:35:47.420159 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.420537 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.420707 4858 scope.go:117] "RemoveContainer" containerID="654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78" Feb 18 00:35:47 crc kubenswrapper[4858]: E0218 00:35:47.420977 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" Feb 18 00:35:47 crc kubenswrapper[4858]: E0218 00:35:47.422071 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:47 crc kubenswrapper[4858]: E0218 00:35:47.422275 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:47 crc kubenswrapper[4858]: E0218 00:35:47.422561 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.425798 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:52:24.357562215 +0000 UTC Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.442481 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c1a96ad-b27d-4347-9613-ecbf040905aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2840af40ad5521b99a904966a29d6021b2c6d194521b8a0887e06162ae3ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8652de78b1e8343e7c323df13f080deb9e31fad243abed2d7059161c69843a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bbc915ebeff415c6e1d4888ed16a1eba95ff7420756c40907c71b3652a3a166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b5211dd317867170b65bf5493cb95d099a21e9498ccb2d7da269599defbf1f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.464279 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.470710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.471226 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.471249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.471421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.471455 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.482898 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v2whc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f362c73a-7069-42a2-b85e-4e823a1a8fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30803587a6b17cfe3d128f9a491ba1e020ca5dc3938f43c58f918c2dfc2ee944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j42x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v2whc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.501908 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c37420e-6ee9-4827-be9c-060d919663b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99103e5e72463f14ac12cb5d83f4ebdec29515a3d5082fb46a14b6152bb669f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5ce045b0b2d956d2ceae85fe58a0283e31d62b99a2618add17e477c8ebce86d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gnnml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 
00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.518733 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6889a266-e6ba-4995-8dea-6768bf9d6ad0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8023934d83dff5d479338ea0a0f8a97f50b10c8b8ac1fafe3a6f852e442981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f7ac7489e07b41d0a77d4812ea728d416a47fa2320ee63fa63de2cff00a520\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.538766 4858 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff5467e3be8531aa538fc5fda38407abdccbb49ac67f0e8681f9931a7c867eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.560986 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.574894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.574946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.574964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.574989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.575155 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.576331 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://010688f67c70af03908591eee6c2524a06e4ce5363a52187b5ae732cb2caf6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bb4ee37fde9d0c68fee2d61053f071d8c6103d64741e147525dd0868dc60f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.589413 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7064635a-c927-4499-98ce-76833fb5801c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2v8lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbdlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.606762 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24aebe5-ff91-47a8-b642-d7dcc25f9089\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0415bdcb0566e3795b0c5796437e9d0c35761f6836fab59e2de8a8448143c75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d685e476ff0c8c3b4025cd9f19054732b10d79f52753e3ed5cdc76616bea4665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e846935d32c0bee360594aae297d82a0d48bdbeed07a9e94c5e4e9e619b3cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb1cbf08c4df552d2d0506589786bc44a18d8dbd9fd445fbda9ee4a165d5f50a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444c4b7abfc24723bc00dc5da98eb0a9cab7e6cd4ad1f1e8561a77eaebd537e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489266736fc457a50af4f05345bf4fb10da3b074df9890d558117673a44f7b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21dc2f9f6e60fd64837c83b850317f4077aff81e54cf3423ff729936bfc515e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n62vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n4pmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.619658 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95ba33b5-7799-44ab-8de6-451433944bb8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:34:26Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 00:34:21.063317 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 00:34:21.066648 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-369112733/tls.crt::/tmp/serving-cert-369112733/tls.key\\\\\\\"\\\\nI0218 00:34:26.594020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:34:26.597919 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:34:26.597953 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:34:26.597981 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:34:26.597991 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:34:26.607564 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 00:34:26.607608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607617 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:34:26.607626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:34:26.607632 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:34:26.607638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:34:26.607643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 00:34:26.608000 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 00:34:26.611817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.634045 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"587c54fe-b6f7-4e4c-8c26-082945b208eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b681be8934ea25565b7a30cea7aee43891617ab6e5146c5a988d5df2a3df0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02d2c1f83d56bceec6b2f68a5a3b015335bd48724adbc35f631b6d5d1ac8e2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f181b58866233443352181b936fcd870c779f99b69901dcf4935c12808646af0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.644599 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3312154808e9e075ae206340a1eb6e02763bb69e2af997d8beac51f222b1abe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.656253 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jgxjq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e0eed0-c83b-4418-9587-7175dec43dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf85d4bbb8a7b2a7f2a97248839662dd01527940b865f1104abb3170c0749a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcgtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jgxjq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.667541 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.677954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.677999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.678010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.678028 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.678039 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.683961 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sr8bs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"631d8e25-82dd-4462-b98d-f076e7264b67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:20Z\\\",\\\"message\\\":\\\"2026-02-18T00:34:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73\\\\n2026-02-18T00:34:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f0c0f4d-5a90-417a-be2e-03607ebc1c73 to /host/opt/cni/bin/\\\\n2026-02-18T00:34:35Z [verbose] multus-daemon started\\\\n2026-02-18T00:34:35Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:35:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bt52b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sr8bs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.695880 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7172df49-6116-4968-a2b5-a1afb116568b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d0f39ca8ff53a42eaf7f611d9402018a16a286c3d9a13ff89612f4cde3b3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4snxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cbdbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.718316 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62c71780-47e7-4e14-9b93-60050f6f3141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:35:32Z\\\",\\\"message\\\":\\\"ver at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 00:35:32.351561 6959 services_controller.go:453] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics template LB for network=default: []services.LB{}\\\\nI0218 00:35:32.351602 6959 services_controller.go:454] Service openshift-operator-lifecycle-manager/catalog-operator-metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0218 00:35:32.351585 6959 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 00:35:32.351675 6959 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:35:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jjq7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.780209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.780600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.780756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.781039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.781264 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.884841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.884915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.884939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.884969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.884993 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.987794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.987860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.987884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.987915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:47 crc kubenswrapper[4858]: I0218 00:35:47.987937 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:47Z","lastTransitionTime":"2026-02-18T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.090667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.090742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.090760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.090788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.090805 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:48Z","lastTransitionTime":"2026-02-18T00:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.193897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.193971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.193995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.194029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.194052 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:48Z","lastTransitionTime":"2026-02-18T00:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.297102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.297153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.297170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.297194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.297216 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:48Z","lastTransitionTime":"2026-02-18T00:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.401334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.401413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.401440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.401472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.401533 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:48Z","lastTransitionTime":"2026-02-18T00:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.426489 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:41:59.760373279 +0000 UTC Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.504531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.504593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.504610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.504636 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.504659 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:48Z","lastTransitionTime":"2026-02-18T00:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.608364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.608447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.608469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.608532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.608558 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:48Z","lastTransitionTime":"2026-02-18T00:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.711617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.711705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.711728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.711757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.711779 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:48Z","lastTransitionTime":"2026-02-18T00:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.815378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.815450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.815473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.815549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.815576 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:48Z","lastTransitionTime":"2026-02-18T00:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.918444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.918628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.918653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.918678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:48 crc kubenswrapper[4858]: I0218 00:35:48.918695 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:48Z","lastTransitionTime":"2026-02-18T00:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.021822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.021881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.021905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.021932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.021953 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.125355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.125433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.125456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.125485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.125540 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.200162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.200232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.200258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.200286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.200308 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.221785 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.227120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.227180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.227198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.227221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.227239 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.249702 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.255065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.255145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.255165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.255190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.255210 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.275632 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.280383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.280447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.280459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.280510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.280524 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.299308 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.305096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.305155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.305175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.305201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.305225 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.326357 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:35:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6349ead0-20de-4c0d-9a78-8877524d5e2e\\\",\\\"systemUUID\\\":\\\"9d2e5599-fe23-41b1-a47a-55e31a585d4f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:35:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.326621 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.328149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.328209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.328231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.328257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.328276 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.418927 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.418975 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.419237 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.419238 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.419371 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.419155 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.419546 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:49 crc kubenswrapper[4858]: E0218 00:35:49.419846 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.426983 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:24:38.553820968 +0000 UTC Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.431169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.431222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.431239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.431265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.431282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.534610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.534684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.534700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.534725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.534743 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.638257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.638314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.638330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.638353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.638369 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.742312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.742382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.742403 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.742432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.742454 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.845955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.846016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.846035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.846060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.846077 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.949743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.949814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.949832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.949860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:49 crc kubenswrapper[4858]: I0218 00:35:49.949878 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:49Z","lastTransitionTime":"2026-02-18T00:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.052940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.053011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.053031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.053056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.053073 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.156610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.156676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.156693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.156718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.156735 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.259745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.259831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.259848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.259873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.259890 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.363743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.363811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.363829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.363854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.363997 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.427802 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:10:17.59789453 +0000 UTC Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.467717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.467787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.467798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.467816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.467828 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.571372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.571455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.571479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.571544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.571568 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.674341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.674401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.674420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.674446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.674463 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.777885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.777933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.777950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.777974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.777990 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.880222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.880280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.880297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.880325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.880341 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.983681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.983737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.983753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.983775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:50 crc kubenswrapper[4858]: I0218 00:35:50.983792 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:50Z","lastTransitionTime":"2026-02-18T00:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.086475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.086658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.086682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.086704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.086721 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:51Z","lastTransitionTime":"2026-02-18T00:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.189541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.189608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.189626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.189652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.189670 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:51Z","lastTransitionTime":"2026-02-18T00:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.292590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.292671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.292694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.292720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.292741 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:51Z","lastTransitionTime":"2026-02-18T00:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.396185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.396259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.396286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.396315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.396336 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:51Z","lastTransitionTime":"2026-02-18T00:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.419290 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.419380 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:51 crc kubenswrapper[4858]: E0218 00:35:51.419481 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.419540 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.419552 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:51 crc kubenswrapper[4858]: E0218 00:35:51.419712 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:51 crc kubenswrapper[4858]: E0218 00:35:51.419830 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:51 crc kubenswrapper[4858]: E0218 00:35:51.419973 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.428196 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:40:26.540490812 +0000 UTC Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.499184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.499211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.499218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.499232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.499242 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:51Z","lastTransitionTime":"2026-02-18T00:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.601866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.601950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.601970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.601998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.602015 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:51Z","lastTransitionTime":"2026-02-18T00:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.704849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.704911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.704926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.704946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.704961 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:51Z","lastTransitionTime":"2026-02-18T00:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.807385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.807812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.807975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.808142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.808281 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:51Z","lastTransitionTime":"2026-02-18T00:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.862704 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:51 crc kubenswrapper[4858]: E0218 00:35:51.863112 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:35:51 crc kubenswrapper[4858]: E0218 00:35:51.863257 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs podName:7064635a-c927-4499-98ce-76833fb5801c nodeName:}" failed. No retries permitted until 2026-02-18 00:36:55.863239913 +0000 UTC m=+169.169076645 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs") pod "network-metrics-daemon-jbdlz" (UID: "7064635a-c927-4499-98ce-76833fb5801c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.911026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.911091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.911117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.911150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:51 crc kubenswrapper[4858]: I0218 00:35:51.911171 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:51Z","lastTransitionTime":"2026-02-18T00:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.013788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.014224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.014418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.014642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.014791 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.118570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.118642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.118681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.118713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.118736 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.221158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.221213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.221224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.221274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.221285 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.323713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.323780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.323798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.323821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.323839 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.426579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.426645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.426662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.426685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.426702 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.428854 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:35:27.554471277 +0000 UTC Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.439528 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.529214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.529242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.529253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.529268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.529278 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.631792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.631834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.631844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.631859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.631871 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.734211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.734250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.734260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.734276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.734284 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.836776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.836816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.836824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.836839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.836848 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.939935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.939977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.939985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.940000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:52 crc kubenswrapper[4858]: I0218 00:35:52.940011 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:52Z","lastTransitionTime":"2026-02-18T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.042671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.042718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.042730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.042745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.042759 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.145837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.145921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.145947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.145973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.145994 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.248911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.249269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.249411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.249606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.249766 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.352719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.352779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.352795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.352829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.352865 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.419065 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:53 crc kubenswrapper[4858]: E0218 00:35:53.419467 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.419882 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.419887 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.419972 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:53 crc kubenswrapper[4858]: E0218 00:35:53.420115 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:53 crc kubenswrapper[4858]: E0218 00:35:53.420785 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:53 crc kubenswrapper[4858]: E0218 00:35:53.420923 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.428982 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:10:41.589383989 +0000 UTC Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.455388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.455462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.455476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.455517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.455534 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.558992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.559047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.559056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.559075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.559088 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.661729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.661807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.661825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.661849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.661869 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.764403 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.764472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.764490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.764555 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.764578 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.867147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.867224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.867247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.867274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.867292 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.970451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.970566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.970585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.970613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:53 crc kubenswrapper[4858]: I0218 00:35:53.970631 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:53Z","lastTransitionTime":"2026-02-18T00:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.073263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.073342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.073360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.073414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.073433 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:54Z","lastTransitionTime":"2026-02-18T00:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.176895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.176976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.176993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.177023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.177041 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:54Z","lastTransitionTime":"2026-02-18T00:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.280309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.280437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.280468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.280538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.280557 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:54Z","lastTransitionTime":"2026-02-18T00:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.384009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.384083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.384107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.384132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.384149 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:54Z","lastTransitionTime":"2026-02-18T00:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.429539 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:29:06.228397247 +0000 UTC Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.487092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.487156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.487201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.487227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.487241 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:54Z","lastTransitionTime":"2026-02-18T00:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.591184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.591279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.591292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.591314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.591388 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:54Z","lastTransitionTime":"2026-02-18T00:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.695562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.695643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.695666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.695692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.695709 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:54Z","lastTransitionTime":"2026-02-18T00:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.798171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.798234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.798252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.798278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.798297 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:54Z","lastTransitionTime":"2026-02-18T00:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.901847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.901923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.901947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.901981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:54 crc kubenswrapper[4858]: I0218 00:35:54.902000 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:54Z","lastTransitionTime":"2026-02-18T00:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.005112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.005181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.005201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.005225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.005242 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.108392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.108448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.108466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.108490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.108543 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.211040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.211106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.211120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.211143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.211158 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.313916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.313995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.314015 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.314044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.314068 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.417774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.417851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.417876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.417902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.417920 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.418781 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.418829 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.418829 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.418936 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:55 crc kubenswrapper[4858]: E0218 00:35:55.419082 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:55 crc kubenswrapper[4858]: E0218 00:35:55.419256 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:55 crc kubenswrapper[4858]: E0218 00:35:55.419329 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:55 crc kubenswrapper[4858]: E0218 00:35:55.419420 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.430020 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:24:19.296458776 +0000 UTC Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.521038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.521097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.521110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.521134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.521151 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.624767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.624849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.624873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.624902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.624924 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.728325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.728391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.728414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.728439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.728455 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.831809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.831876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.831893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.831918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.831938 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.935186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.935264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.935283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.935314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:55 crc kubenswrapper[4858]: I0218 00:35:55.935347 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:55Z","lastTransitionTime":"2026-02-18T00:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.039032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.039107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.039125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.039153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.039173 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.148456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.148758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.148816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.148847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.148867 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.252893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.252939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.252955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.252977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.252993 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.356514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.356576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.356593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.356617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.356638 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.431178 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:21:50.661956642 +0000 UTC Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.459866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.459934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.459956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.459990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.460015 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.563051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.563128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.563148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.563175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.563196 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.666482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.666580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.666599 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.666625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.666643 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.770152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.770222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.770239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.770264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.770281 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.872843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.872901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.872917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.872944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.872961 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.976829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.976915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.976992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.977031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:56 crc kubenswrapper[4858]: I0218 00:35:56.977059 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:56Z","lastTransitionTime":"2026-02-18T00:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.080314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.080375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.080393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.080419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.080439 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:57Z","lastTransitionTime":"2026-02-18T00:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.183928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.184274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.184292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.184312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.184325 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:57Z","lastTransitionTime":"2026-02-18T00:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.287260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.287313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.287326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.287342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.287357 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:57Z","lastTransitionTime":"2026-02-18T00:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.390401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.390469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.390490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.390553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.390575 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:57Z","lastTransitionTime":"2026-02-18T00:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.418777 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.418854 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.418803 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.418777 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:57 crc kubenswrapper[4858]: E0218 00:35:57.419068 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:57 crc kubenswrapper[4858]: E0218 00:35:57.419285 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:57 crc kubenswrapper[4858]: E0218 00:35:57.419378 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:57 crc kubenswrapper[4858]: E0218 00:35:57.419775 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.431565 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:27:36.529620754 +0000 UTC Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.493962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.494024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.494042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.494066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.494082 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:57Z","lastTransitionTime":"2026-02-18T00:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.497706 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.49767298 podStartE2EDuration="5.49767298s" podCreationTimestamp="2026-02-18 00:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.472986341 +0000 UTC m=+110.778823093" watchObservedRunningTime="2026-02-18 00:35:57.49767298 +0000 UTC m=+110.803509712" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.518003 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.517974184 podStartE2EDuration="1m26.517974184s" podCreationTimestamp="2026-02-18 00:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.517635295 +0000 UTC m=+110.823472027" watchObservedRunningTime="2026-02-18 00:35:57.517974184 +0000 UTC m=+110.823810916" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.518362 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.518356803 podStartE2EDuration="1m30.518356803s" podCreationTimestamp="2026-02-18 00:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.498293836 +0000 UTC m=+110.804130578" watchObservedRunningTime="2026-02-18 00:35:57.518356803 +0000 UTC m=+110.824193535" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.584763 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jgxjq" podStartSLOduration=85.584730735 podStartE2EDuration="1m25.584730735s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.561384378 +0000 UTC m=+110.867221120" watchObservedRunningTime="2026-02-18 00:35:57.584730735 +0000 UTC m=+110.890567487" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.585169 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-n4pmf" podStartSLOduration=85.585163366 podStartE2EDuration="1m25.585163366s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.584460198 +0000 UTC m=+110.890296940" watchObservedRunningTime="2026-02-18 00:35:57.585163366 +0000 UTC m=+110.891000108" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.597309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.597363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.597378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.597397 4858 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.597410 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:57Z","lastTransitionTime":"2026-02-18T00:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.649172 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-sr8bs" podStartSLOduration=85.649139499 podStartE2EDuration="1m25.649139499s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.631338527 +0000 UTC m=+110.937175269" watchObservedRunningTime="2026-02-18 00:35:57.649139499 +0000 UTC m=+110.954976241" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.649772 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podStartSLOduration=85.649763045 podStartE2EDuration="1m25.649763045s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.649064067 +0000 UTC m=+110.954900809" watchObservedRunningTime="2026-02-18 00:35:57.649763045 +0000 UTC m=+110.955599787" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.699995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.700053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.700071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.700099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.700122 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:57Z","lastTransitionTime":"2026-02-18T00:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.706922 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.706895863 podStartE2EDuration="56.706895863s" podCreationTimestamp="2026-02-18 00:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.706725738 +0000 UTC m=+111.012562510" watchObservedRunningTime="2026-02-18 00:35:57.706895863 +0000 UTC m=+111.012732625" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.734862 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-v2whc" podStartSLOduration=85.734833141 podStartE2EDuration="1m25.734833141s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.734211236 +0000 UTC m=+111.040047998" watchObservedRunningTime="2026-02-18 00:35:57.734833141 +0000 UTC m=+111.040669913" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.750202 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gnnml" podStartSLOduration=84.750182274 podStartE2EDuration="1m24.750182274s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.749565169 +0000 UTC m=+111.055401941" watchObservedRunningTime="2026-02-18 00:35:57.750182274 +0000 UTC m=+111.056019036" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.788777 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.78874744 podStartE2EDuration="31.78874744s" podCreationTimestamp="2026-02-18 00:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:35:57.769171285 +0000 UTC m=+111.075008057" watchObservedRunningTime="2026-02-18 00:35:57.78874744 +0000 UTC m=+111.094584222" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.803210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.803278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.803296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.803320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.803340 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:57Z","lastTransitionTime":"2026-02-18T00:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.906160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.906213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.906232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.906256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:57 crc kubenswrapper[4858]: I0218 00:35:57.906274 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:57Z","lastTransitionTime":"2026-02-18T00:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.009600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.009674 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.009699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.009726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.009758 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.112456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.112579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.112607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.112639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.112660 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.230720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.230776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.230797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.230824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.230844 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.333314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.333365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.333375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.333392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.333403 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.432693 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:58:07.566114947 +0000 UTC Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.436271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.436353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.436367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.436384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.436396 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.539084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.539165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.539184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.539216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.539239 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.641598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.641642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.641654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.641671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.641687 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.744422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.744548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.744577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.744606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.744632 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.847623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.847670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.847683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.847705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.847719 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.951554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.951694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.951717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.951740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:58 crc kubenswrapper[4858]: I0218 00:35:58.951756 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:58Z","lastTransitionTime":"2026-02-18T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.054475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.054557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.054571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.054587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.054600 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:59Z","lastTransitionTime":"2026-02-18T00:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.157360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.157422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.157440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.157470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.157487 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:59Z","lastTransitionTime":"2026-02-18T00:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.260705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.260772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.260790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.260818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.260844 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:59Z","lastTransitionTime":"2026-02-18T00:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.363919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.363995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.364017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.364045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.364066 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:59Z","lastTransitionTime":"2026-02-18T00:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.419429 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.419480 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.419557 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:35:59 crc kubenswrapper[4858]: E0218 00:35:59.419662 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.420775 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:35:59 crc kubenswrapper[4858]: E0218 00:35:59.421010 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:35:59 crc kubenswrapper[4858]: E0218 00:35:59.421239 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:35:59 crc kubenswrapper[4858]: E0218 00:35:59.421628 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.433185 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 21:09:06.25820218 +0000 UTC Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.467430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.467592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.467663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.467692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.467746 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:59Z","lastTransitionTime":"2026-02-18T00:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.570724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.570787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.570804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.570829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.570849 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:59Z","lastTransitionTime":"2026-02-18T00:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.673098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.673143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.673152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.673169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.673182 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:59Z","lastTransitionTime":"2026-02-18T00:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.704172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.704238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.704259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.704283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.704300 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:35:59Z","lastTransitionTime":"2026-02-18T00:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.755823 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7"] Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.756309 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.757636 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.758037 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.758065 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.758593 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.852739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25ccc633-eec1-4116-b36b-6f144b6becff-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.853169 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25ccc633-eec1-4116-b36b-6f144b6becff-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.853244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/25ccc633-eec1-4116-b36b-6f144b6becff-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.853279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25ccc633-eec1-4116-b36b-6f144b6becff-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.853339 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/25ccc633-eec1-4116-b36b-6f144b6becff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.954094 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25ccc633-eec1-4116-b36b-6f144b6becff-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.954162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25ccc633-eec1-4116-b36b-6f144b6becff-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.954236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/25ccc633-eec1-4116-b36b-6f144b6becff-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.954271 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25ccc633-eec1-4116-b36b-6f144b6becff-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.954308 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/25ccc633-eec1-4116-b36b-6f144b6becff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.954391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/25ccc633-eec1-4116-b36b-6f144b6becff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.954552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/25ccc633-eec1-4116-b36b-6f144b6becff-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.956395 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25ccc633-eec1-4116-b36b-6f144b6becff-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.963850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/25ccc633-eec1-4116-b36b-6f144b6becff-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:35:59 crc kubenswrapper[4858]: I0218 00:35:59.988221 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25ccc633-eec1-4116-b36b-6f144b6becff-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rc2w7\" (UID: \"25ccc633-eec1-4116-b36b-6f144b6becff\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:36:00 crc kubenswrapper[4858]: I0218 00:36:00.075903 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" Feb 18 00:36:00 crc kubenswrapper[4858]: I0218 00:36:00.433597 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:12:07.70828615 +0000 UTC Feb 18 00:36:00 crc kubenswrapper[4858]: I0218 00:36:00.433666 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 00:36:00 crc kubenswrapper[4858]: I0218 00:36:00.443423 4858 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 00:36:01 crc kubenswrapper[4858]: I0218 00:36:01.098697 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" event={"ID":"25ccc633-eec1-4116-b36b-6f144b6becff","Type":"ContainerStarted","Data":"5f57cd90f9c4bb48932c79908ef9889110ba5fb9bcc66cbd0906d065201c0c4a"} Feb 18 00:36:01 crc kubenswrapper[4858]: I0218 00:36:01.098763 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" event={"ID":"25ccc633-eec1-4116-b36b-6f144b6becff","Type":"ContainerStarted","Data":"7d6f54af0ff96c053963ebfd88e60fa1fbead37fd20a2cc781f7b6ecb4024026"} Feb 18 00:36:01 crc kubenswrapper[4858]: I0218 00:36:01.122995 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rc2w7" podStartSLOduration=89.122980457 podStartE2EDuration="1m29.122980457s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:01.120709553 +0000 UTC m=+114.426546285" watchObservedRunningTime="2026-02-18 00:36:01.122980457 +0000 UTC m=+114.428817189" Feb 18 00:36:01 crc kubenswrapper[4858]: I0218 00:36:01.419464 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:01 crc kubenswrapper[4858]: I0218 00:36:01.419479 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:01 crc kubenswrapper[4858]: I0218 00:36:01.419551 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:01 crc kubenswrapper[4858]: E0218 00:36:01.419751 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:01 crc kubenswrapper[4858]: E0218 00:36:01.419863 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:01 crc kubenswrapper[4858]: I0218 00:36:01.419953 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:01 crc kubenswrapper[4858]: E0218 00:36:01.420022 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:01 crc kubenswrapper[4858]: E0218 00:36:01.420166 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:02 crc kubenswrapper[4858]: I0218 00:36:02.420366 4858 scope.go:117] "RemoveContainer" containerID="654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78" Feb 18 00:36:02 crc kubenswrapper[4858]: E0218 00:36:02.420659 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jjq7k_openshift-ovn-kubernetes(62c71780-47e7-4e14-9b93-60050f6f3141)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" Feb 18 00:36:03 crc kubenswrapper[4858]: I0218 00:36:03.419192 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:03 crc kubenswrapper[4858]: I0218 00:36:03.419225 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:03 crc kubenswrapper[4858]: I0218 00:36:03.419250 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:03 crc kubenswrapper[4858]: E0218 00:36:03.419986 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:03 crc kubenswrapper[4858]: E0218 00:36:03.419821 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:03 crc kubenswrapper[4858]: I0218 00:36:03.419366 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:03 crc kubenswrapper[4858]: E0218 00:36:03.420096 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:03 crc kubenswrapper[4858]: E0218 00:36:03.420221 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:05 crc kubenswrapper[4858]: I0218 00:36:05.418810 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:05 crc kubenswrapper[4858]: I0218 00:36:05.419884 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:05 crc kubenswrapper[4858]: I0218 00:36:05.419922 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:05 crc kubenswrapper[4858]: E0218 00:36:05.419871 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:05 crc kubenswrapper[4858]: E0218 00:36:05.420018 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:05 crc kubenswrapper[4858]: I0218 00:36:05.420074 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:05 crc kubenswrapper[4858]: E0218 00:36:05.420104 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:05 crc kubenswrapper[4858]: E0218 00:36:05.420324 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 00:36:07.121157 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/1.log" Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 00:36:07.121869 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/0.log" Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 00:36:07.121941 4858 generic.go:334] "Generic (PLEG): container finished" podID="631d8e25-82dd-4462-b98d-f076e7264b67" containerID="1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413" exitCode=1 Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 00:36:07.121988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sr8bs" event={"ID":"631d8e25-82dd-4462-b98d-f076e7264b67","Type":"ContainerDied","Data":"1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413"} Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 00:36:07.122045 4858 scope.go:117] "RemoveContainer" containerID="fa11380889bc0f0917918faee28978870f2b436671372c1e3e2946349925bdf4" Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 00:36:07.122746 4858 scope.go:117] "RemoveContainer" containerID="1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413" Feb 18 00:36:07 crc kubenswrapper[4858]: E0218 00:36:07.123121 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-sr8bs_openshift-multus(631d8e25-82dd-4462-b98d-f076e7264b67)\"" pod="openshift-multus/multus-sr8bs" podUID="631d8e25-82dd-4462-b98d-f076e7264b67" Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 
00:36:07.419348 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 00:36:07.419428 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:07 crc kubenswrapper[4858]: E0218 00:36:07.420168 4858 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 00:36:07 crc kubenswrapper[4858]: E0218 00:36:07.421346 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 00:36:07.421374 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:07 crc kubenswrapper[4858]: E0218 00:36:07.421471 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:07 crc kubenswrapper[4858]: E0218 00:36:07.421590 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:07 crc kubenswrapper[4858]: I0218 00:36:07.421800 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:07 crc kubenswrapper[4858]: E0218 00:36:07.422063 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:07 crc kubenswrapper[4858]: E0218 00:36:07.529168 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:36:08 crc kubenswrapper[4858]: I0218 00:36:08.128357 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/1.log" Feb 18 00:36:09 crc kubenswrapper[4858]: I0218 00:36:09.418682 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:09 crc kubenswrapper[4858]: I0218 00:36:09.418817 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:09 crc kubenswrapper[4858]: I0218 00:36:09.418648 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:09 crc kubenswrapper[4858]: E0218 00:36:09.419018 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:09 crc kubenswrapper[4858]: I0218 00:36:09.418784 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:09 crc kubenswrapper[4858]: E0218 00:36:09.419159 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:09 crc kubenswrapper[4858]: E0218 00:36:09.419275 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:09 crc kubenswrapper[4858]: E0218 00:36:09.419450 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:11 crc kubenswrapper[4858]: I0218 00:36:11.418941 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:11 crc kubenswrapper[4858]: E0218 00:36:11.419152 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:11 crc kubenswrapper[4858]: I0218 00:36:11.419571 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:11 crc kubenswrapper[4858]: E0218 00:36:11.419734 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:11 crc kubenswrapper[4858]: I0218 00:36:11.419868 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:11 crc kubenswrapper[4858]: E0218 00:36:11.420094 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:11 crc kubenswrapper[4858]: I0218 00:36:11.420136 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:11 crc kubenswrapper[4858]: E0218 00:36:11.420269 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:12 crc kubenswrapper[4858]: E0218 00:36:12.530725 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:36:13 crc kubenswrapper[4858]: I0218 00:36:13.419112 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:13 crc kubenswrapper[4858]: I0218 00:36:13.419192 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:13 crc kubenswrapper[4858]: I0218 00:36:13.419258 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:13 crc kubenswrapper[4858]: E0218 00:36:13.419287 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:13 crc kubenswrapper[4858]: I0218 00:36:13.419348 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:13 crc kubenswrapper[4858]: E0218 00:36:13.419564 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:13 crc kubenswrapper[4858]: E0218 00:36:13.419711 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:13 crc kubenswrapper[4858]: E0218 00:36:13.419833 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:13 crc kubenswrapper[4858]: I0218 00:36:13.420839 4858 scope.go:117] "RemoveContainer" containerID="654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78" Feb 18 00:36:14 crc kubenswrapper[4858]: I0218 00:36:14.156557 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/3.log" Feb 18 00:36:14 crc kubenswrapper[4858]: I0218 00:36:14.160848 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerStarted","Data":"19e70fa0770c17c46684d5759f3196c3d8f2f2c334f3870ed602967094fb84e1"} Feb 18 00:36:14 crc kubenswrapper[4858]: I0218 00:36:14.162023 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:36:14 crc kubenswrapper[4858]: I0218 00:36:14.201701 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podStartSLOduration=102.201664674 podStartE2EDuration="1m42.201664674s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:14.196109659 +0000 UTC m=+127.501946411" watchObservedRunningTime="2026-02-18 00:36:14.201664674 +0000 UTC m=+127.507501446" Feb 18 00:36:14 crc kubenswrapper[4858]: I0218 00:36:14.403400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jbdlz"] Feb 18 00:36:14 crc kubenswrapper[4858]: I0218 00:36:14.403613 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:14 crc kubenswrapper[4858]: E0218 00:36:14.403778 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:15 crc kubenswrapper[4858]: I0218 00:36:15.419297 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:15 crc kubenswrapper[4858]: E0218 00:36:15.419461 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:15 crc kubenswrapper[4858]: I0218 00:36:15.419789 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:15 crc kubenswrapper[4858]: I0218 00:36:15.419835 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:15 crc kubenswrapper[4858]: E0218 00:36:15.419932 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:15 crc kubenswrapper[4858]: E0218 00:36:15.420188 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:16 crc kubenswrapper[4858]: I0218 00:36:16.419426 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:16 crc kubenswrapper[4858]: E0218 00:36:16.419689 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:17 crc kubenswrapper[4858]: I0218 00:36:17.418721 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:17 crc kubenswrapper[4858]: I0218 00:36:17.418857 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:17 crc kubenswrapper[4858]: E0218 00:36:17.420574 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:17 crc kubenswrapper[4858]: I0218 00:36:17.420598 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:17 crc kubenswrapper[4858]: E0218 00:36:17.421479 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:17 crc kubenswrapper[4858]: E0218 00:36:17.421651 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:17 crc kubenswrapper[4858]: E0218 00:36:17.531603 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:36:18 crc kubenswrapper[4858]: I0218 00:36:18.419274 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:18 crc kubenswrapper[4858]: E0218 00:36:18.419472 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:19 crc kubenswrapper[4858]: I0218 00:36:19.419053 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:19 crc kubenswrapper[4858]: I0218 00:36:19.419176 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:19 crc kubenswrapper[4858]: I0218 00:36:19.419280 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:19 crc kubenswrapper[4858]: E0218 00:36:19.419282 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:19 crc kubenswrapper[4858]: E0218 00:36:19.419417 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:19 crc kubenswrapper[4858]: E0218 00:36:19.419726 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:20 crc kubenswrapper[4858]: I0218 00:36:20.419221 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:20 crc kubenswrapper[4858]: E0218 00:36:20.419420 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:21 crc kubenswrapper[4858]: I0218 00:36:21.419454 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:21 crc kubenswrapper[4858]: I0218 00:36:21.419541 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:21 crc kubenswrapper[4858]: I0218 00:36:21.419580 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:21 crc kubenswrapper[4858]: E0218 00:36:21.419723 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:21 crc kubenswrapper[4858]: E0218 00:36:21.419938 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:21 crc kubenswrapper[4858]: I0218 00:36:21.420634 4858 scope.go:117] "RemoveContainer" containerID="1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413" Feb 18 00:36:21 crc kubenswrapper[4858]: E0218 00:36:21.420883 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:22 crc kubenswrapper[4858]: I0218 00:36:22.198639 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/1.log" Feb 18 00:36:22 crc kubenswrapper[4858]: I0218 00:36:22.198981 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sr8bs" event={"ID":"631d8e25-82dd-4462-b98d-f076e7264b67","Type":"ContainerStarted","Data":"6df787764e784c8ac5e384a5692545df23cffb81f4ccfb4027bcea91c5242b7e"} Feb 18 00:36:22 crc kubenswrapper[4858]: I0218 00:36:22.418690 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:22 crc kubenswrapper[4858]: E0218 00:36:22.418895 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:22 crc kubenswrapper[4858]: E0218 00:36:22.533419 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:36:23 crc kubenswrapper[4858]: I0218 00:36:23.418539 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:23 crc kubenswrapper[4858]: I0218 00:36:23.418588 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:23 crc kubenswrapper[4858]: E0218 00:36:23.418798 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:23 crc kubenswrapper[4858]: I0218 00:36:23.418837 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:23 crc kubenswrapper[4858]: E0218 00:36:23.418965 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:23 crc kubenswrapper[4858]: E0218 00:36:23.419149 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:24 crc kubenswrapper[4858]: I0218 00:36:24.418669 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:24 crc kubenswrapper[4858]: E0218 00:36:24.418894 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:25 crc kubenswrapper[4858]: I0218 00:36:25.419355 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:25 crc kubenswrapper[4858]: I0218 00:36:25.419543 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:25 crc kubenswrapper[4858]: I0218 00:36:25.419651 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:25 crc kubenswrapper[4858]: E0218 00:36:25.419891 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:25 crc kubenswrapper[4858]: E0218 00:36:25.420046 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:25 crc kubenswrapper[4858]: E0218 00:36:25.420184 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:26 crc kubenswrapper[4858]: I0218 00:36:26.419042 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:26 crc kubenswrapper[4858]: E0218 00:36:26.419260 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbdlz" podUID="7064635a-c927-4499-98ce-76833fb5801c" Feb 18 00:36:27 crc kubenswrapper[4858]: I0218 00:36:27.418575 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:27 crc kubenswrapper[4858]: I0218 00:36:27.418584 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:27 crc kubenswrapper[4858]: E0218 00:36:27.420473 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:36:27 crc kubenswrapper[4858]: I0218 00:36:27.420577 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:27 crc kubenswrapper[4858]: E0218 00:36:27.420730 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:36:27 crc kubenswrapper[4858]: E0218 00:36:27.420889 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:36:28 crc kubenswrapper[4858]: I0218 00:36:28.418275 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:28 crc kubenswrapper[4858]: I0218 00:36:28.421471 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 00:36:28 crc kubenswrapper[4858]: I0218 00:36:28.423100 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 00:36:29 crc kubenswrapper[4858]: I0218 00:36:29.418843 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:29 crc kubenswrapper[4858]: I0218 00:36:29.419277 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:29 crc kubenswrapper[4858]: I0218 00:36:29.419618 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:29 crc kubenswrapper[4858]: I0218 00:36:29.422747 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 00:36:29 crc kubenswrapper[4858]: I0218 00:36:29.423050 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 00:36:29 crc kubenswrapper[4858]: I0218 00:36:29.423073 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 00:36:29 crc kubenswrapper[4858]: I0218 00:36:29.423127 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.457899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.516071 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.516948 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.520417 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s76q5"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.521150 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.522002 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.523040 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.524680 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-lpg4n"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.525450 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.527996 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.528299 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.530567 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.531082 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.533639 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.533916 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.534071 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.536430 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ztstf"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.536720 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.541362 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-27k9h"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.541768 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.546592 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.547226 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.547862 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.549250 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.557585 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.558391 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.579124 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kqbdg"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.594263 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.594770 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.594961 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.595110 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.595192 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.595292 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.595392 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.595486 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.595628 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.595698 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.595763 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.595838 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.596467 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.596608 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.596777 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.596919 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.596974 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 00:36:30 crc 
kubenswrapper[4858]: I0218 00:36:30.597341 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.597403 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.597553 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.597684 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.597781 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.597822 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.597950 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.597983 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.597986 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6269b5c1-c01e-4f81-8c44-94455a9cc858-images\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598012 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598018 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz259\" (UniqueName: \"kubernetes.io/projected/1963f275-b68a-4539-9371-8a38bafa03eb-kube-api-access-gz259\") pod \"openshift-apiserver-operator-796bbdcf4f-c7nlk\" (UID: \"1963f275-b68a-4539-9371-8a38bafa03eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598051 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598118 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598130 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598046 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dac5a4df-1236-446c-91c4-1521fb88d2f4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-client-ca\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-config\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598250 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598269 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vzdrt\" (UID: \"9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598291 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-oauth-serving-cert\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598312 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6269b5c1-c01e-4f81-8c44-94455a9cc858-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1963f275-b68a-4539-9371-8a38bafa03eb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c7nlk\" (UID: \"1963f275-b68a-4539-9371-8a38bafa03eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598346 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-serving-cert\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598260 4858 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-config\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598385 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-serving-cert\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598404 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1963f275-b68a-4539-9371-8a38bafa03eb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c7nlk\" (UID: \"1963f275-b68a-4539-9371-8a38bafa03eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598409 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598429 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrps\" (UniqueName: \"kubernetes.io/projected/a82bb6ce-4801-417a-a4e2-93d1667999ee-kube-api-access-fkrps\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598462 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c607470-6245-49d5-9509-009900f0adef-config\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.598475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c607470-6245-49d5-9509-009900f0adef-serving-cert\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599553 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl5vk\" (UniqueName: \"kubernetes.io/projected/0c607470-6245-49d5-9509-009900f0adef-kube-api-access-sl5vk\") pod 
\"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599613 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-config\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599632 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcnh9\" (UniqueName: \"kubernetes.io/projected/dac5a4df-1236-446c-91c4-1521fb88d2f4-kube-api-access-bcnh9\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599654 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dac5a4df-1236-446c-91c4-1521fb88d2f4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599669 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6269b5c1-c01e-4f81-8c44-94455a9cc858-config\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-client-ca\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599711 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dac5a4df-1236-446c-91c4-1521fb88d2f4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599727 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-service-ca\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599742 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn29m\" (UniqueName: 
\"kubernetes.io/projected/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-kube-api-access-jn29m\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4gqk\" (UniqueName: \"kubernetes.io/projected/9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b-kube-api-access-g4gqk\") pod \"cluster-samples-operator-665b6dd947-vzdrt\" (UID: \"9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4j94\" (UniqueName: \"kubernetes.io/projected/6269b5c1-c01e-4f81-8c44-94455a9cc858-kube-api-access-n4j94\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599802 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-oauth-config\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599824 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c607470-6245-49d5-9509-009900f0adef-service-ca-bundle\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-serving-cert\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599858 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c607470-6245-49d5-9509-009900f0adef-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599883 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-trusted-ca-bundle\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.599899 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwhxq\" 
(UniqueName: \"kubernetes.io/projected/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-kube-api-access-fwhxq\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.600119 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.600449 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.600871 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.601319 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.602536 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.602837 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.603368 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gtw2k"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.603825 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.604207 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.604799 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.605169 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6djsl"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.605694 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6djsl" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.606166 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.606585 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.606862 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.606898 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.607276 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.612296 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.615285 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n879l"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.616025 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.617018 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.619528 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cjd57"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.619946 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.620223 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.620439 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.622224 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29522880-n4bvp"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.623017 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.624099 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.624649 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.624957 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.628276 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.629629 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.629630 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.630029 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.631785 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.632115 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9jdl7"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.632392 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.632645 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.632786 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.632798 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.633242 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.633244 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.633793 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.640372 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.640765 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.643526 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.647420 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.648454 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.674208 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.675929 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.676117 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.676281 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.677046 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.677226 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.677418 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.674586 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.677947 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bc7mz"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.678357 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.678481 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.678537 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2"] Feb 18 00:36:30 crc kubenswrapper[4858]: 
I0218 00:36:30.678623 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.678691 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.678803 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.678843 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.678930 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.675113 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.685108 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.685515 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.686376 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qf4s8"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.687482 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.691280 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.691478 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.691995 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.692188 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.692287 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.692391 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.692644 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.692741 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.692822 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.691506 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.692962 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.693037 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.693130 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.693225 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.693310 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.694377 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.694528 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s76q5"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.694545 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 00:36:30 crc 
kubenswrapper[4858]: I0218 00:36:30.694552 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.694683 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.694718 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.694826 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.694929 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.694829 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.695070 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.695157 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.695270 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.695386 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.695472 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.699732 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p8987"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.700193 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.700393 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.700810 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dac5a4df-1236-446c-91c4-1521fb88d2f4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.700923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6269b5c1-c01e-4f81-8c44-94455a9cc858-config\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701037 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-client-ca\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701146 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b8a5f7-869c-4343-8224-ae76d73073cf-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8drdx\" (UID: \"12b8a5f7-869c-4343-8224-ae76d73073cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701239 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701254 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/896376a1-7809-4597-a315-2089547c2f89-auth-proxy-config\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701592 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dac5a4df-1236-446c-91c4-1521fb88d2f4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701684 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-service-ca\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701781 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn29m\" (UniqueName: \"kubernetes.io/projected/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-kube-api-access-jn29m\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701868 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eef415f0-0fe2-4c5c-a528-3394ce644ff1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.701955 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4447453-79a5-4008-89ec-add924803b82-encryption-config\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702045 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-dir\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702210 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702301 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9bn8\" (UniqueName: \"kubernetes.io/projected/b0bd9345-840f-40e8-946d-b646e19a6b39-kube-api-access-s9bn8\") pod \"migrator-59844c95c7-c7bmt\" (UID: \"b0bd9345-840f-40e8-946d-b646e19a6b39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4gqk\" (UniqueName: \"kubernetes.io/projected/9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b-kube-api-access-g4gqk\") pod \"cluster-samples-operator-665b6dd947-vzdrt\" (UID: \"9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" Feb 18 00:36:30 crc 
kubenswrapper[4858]: I0218 00:36:30.702482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4j94\" (UniqueName: \"kubernetes.io/projected/6269b5c1-c01e-4f81-8c44-94455a9cc858-kube-api-access-n4j94\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-oauth-config\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqpcl\" (UniqueName: \"kubernetes.io/projected/44c5ceae-0c80-4b01-a773-8c222c900f34-kube-api-access-zqpcl\") pod \"kube-storage-version-migrator-operator-b67b599dd-zk6hd\" (UID: \"44c5ceae-0c80-4b01-a773-8c222c900f34\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702780 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646ba69d-8375-436b-a16f-e7bae5475ac6-config\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702882 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-etcd-service-ca\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702973 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c607470-6245-49d5-9509-009900f0adef-service-ca-bundle\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.703066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-serving-cert\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.703157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2798c53f-d277-411d-b95d-3439db650d71-metrics-tls\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.703254 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.702363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6269b5c1-c01e-4f81-8c44-94455a9cc858-config\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.704238 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dac5a4df-1236-446c-91c4-1521fb88d2f4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.710137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c607470-6245-49d5-9509-009900f0adef-service-ca-bundle\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.716476 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-service-ca\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.716673 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c607470-6245-49d5-9509-009900f0adef-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.716724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f95d7fd9-797f-464f-ac5e-e78c353e78ee-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wnh9x\" (UID: \"f95d7fd9-797f-464f-ac5e-e78c353e78ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.716777 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.717382 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 
00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.717462 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvr8\" (UniqueName: \"kubernetes.io/projected/849a4228-f4ae-4b7f-a2c8-5db413e4dd28-kube-api-access-qvvr8\") pod \"dns-operator-744455d44c-kqbdg\" (UID: \"849a4228-f4ae-4b7f-a2c8-5db413e4dd28\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.717487 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xskds\" (UniqueName: \"kubernetes.io/projected/2798c53f-d277-411d-b95d-3439db650d71-kube-api-access-xskds\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.717519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4447453-79a5-4008-89ec-add924803b82-etcd-client\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.717560 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4447453-79a5-4008-89ec-add924803b82-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.717604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-trusted-ca-bundle\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.717669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwhxq\" (UniqueName: \"kubernetes.io/projected/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-kube-api-access-fwhxq\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.717687 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ad236d-4644-4e49-b9d5-194b2746a760-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mkz5l\" (UID: \"08ad236d-4644-4e49-b9d5-194b2746a760\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.717718 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz259\" (UniqueName: \"kubernetes.io/projected/1963f275-b68a-4539-9371-8a38bafa03eb-kube-api-access-gz259\") pod \"openshift-apiserver-operator-796bbdcf4f-c7nlk\" (UID: \"1963f275-b68a-4539-9371-8a38bafa03eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.718744 
4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-trusted-ca-bundle\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.718792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c607470-6245-49d5-9509-009900f0adef-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.718910 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6269b5c1-c01e-4f81-8c44-94455a9cc858-images\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.718948 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.718953 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.719067 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.719101 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5977586b-6538-4050-bfde-dde62e4d87cd-serviceca\") pod \"image-pruner-29522880-n4bvp\" (UID: \"5977586b-6538-4050-bfde-dde62e4d87cd\") " pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.719155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dac5a4df-1236-446c-91c4-1521fb88d2f4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.719200 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-policies\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 
crc kubenswrapper[4858]: I0218 00:36:30.719219 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/896376a1-7809-4597-a315-2089547c2f89-config\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.719288 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dac5a4df-1236-446c-91c4-1521fb88d2f4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.719339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-client-ca\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.719379 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-config\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.719399 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4447453-79a5-4008-89ec-add924803b82-audit-dir\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.719431 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-serving-cert\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720203 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-client-ca\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720258 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720280 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqllt\" (UniqueName: \"kubernetes.io/projected/5977586b-6538-4050-bfde-dde62e4d87cd-kube-api-access-hqllt\") pod \"image-pruner-29522880-n4bvp\" (UID: \"5977586b-6538-4050-bfde-dde62e4d87cd\") " pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720296 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2798c53f-d277-411d-b95d-3439db650d71-trusted-ca\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4447453-79a5-4008-89ec-add924803b82-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c5ceae-0c80-4b01-a773-8c222c900f34-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zk6hd\" (UID: \"44c5ceae-0c80-4b01-a773-8c222c900f34\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720380 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-config\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vzdrt\" (UID: \"9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-oauth-serving-cert\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720452 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f95d7fd9-797f-464f-ac5e-e78c353e78ee-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wnh9x\" (UID: \"f95d7fd9-797f-464f-ac5e-e78c353e78ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720478 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720510 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-config\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720595 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdnr\" (UniqueName: \"kubernetes.io/projected/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-kube-api-access-wcdnr\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720624 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/646ba69d-8375-436b-a16f-e7bae5475ac6-serving-cert\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720657 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhvx\" (UniqueName: \"kubernetes.io/projected/92e95ff1-a825-4d17-825f-f4765353a5f2-kube-api-access-njhvx\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/93a92f34-d9a8-4276-8a97-3f129c4db452-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-574cj\" (UID: \"93a92f34-d9a8-4276-8a97-3f129c4db452\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720697 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ad236d-4644-4e49-b9d5-194b2746a760-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mkz5l\" (UID: \"08ad236d-4644-4e49-b9d5-194b2746a760\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720710 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4447453-79a5-4008-89ec-add924803b82-serving-cert\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720729 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6269b5c1-c01e-4f81-8c44-94455a9cc858-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmrf\" (UniqueName: \"kubernetes.io/projected/eef415f0-0fe2-4c5c-a528-3394ce644ff1-kube-api-access-krmrf\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c5ceae-0c80-4b01-a773-8c222c900f34-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zk6hd\" (UID: \"44c5ceae-0c80-4b01-a773-8c222c900f34\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1963f275-b68a-4539-9371-8a38bafa03eb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c7nlk\" (UID: \"1963f275-b68a-4539-9371-8a38bafa03eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720810 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12b8a5f7-869c-4343-8224-ae76d73073cf-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8drdx\" (UID: \"12b8a5f7-869c-4343-8224-ae76d73073cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720826 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqdgp\" (UniqueName: \"kubernetes.io/projected/b132ff06-2a28-42df-b43b-f923a76b4cca-kube-api-access-lqdgp\") pod \"downloads-7954f5f757-6djsl\" (UID: \"b132ff06-2a28-42df-b43b-f923a76b4cca\") " pod="openshift-console/downloads-7954f5f757-6djsl" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720857 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-serving-cert\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720874 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 
00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mc7\" (UniqueName: \"kubernetes.io/projected/08ad236d-4644-4e49-b9d5-194b2746a760-kube-api-access-q6mc7\") pod \"openshift-controller-manager-operator-756b6f6bc6-mkz5l\" (UID: \"08ad236d-4644-4e49-b9d5-194b2746a760\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-etcd-client\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720918 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp54\" (UniqueName: \"kubernetes.io/projected/93a92f34-d9a8-4276-8a97-3f129c4db452-kube-api-access-2cp54\") pod \"control-plane-machine-set-operator-78cbb6b69f-574cj\" (UID: \"93a92f34-d9a8-4276-8a97-3f129c4db452\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720935 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720950 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/896376a1-7809-4597-a315-2089547c2f89-machine-approver-tls\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.720964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4zs9\" (UniqueName: \"kubernetes.io/projected/896376a1-7809-4597-a315-2089547c2f89-kube-api-access-t4zs9\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721000 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-config\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721018 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mftm7\" (UniqueName: \"kubernetes.io/projected/646ba69d-8375-436b-a16f-e7bae5475ac6-kube-api-access-mftm7\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " 
pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721037 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-serving-cert\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f95d7fd9-797f-464f-ac5e-e78c353e78ee-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wnh9x\" (UID: \"f95d7fd9-797f-464f-ac5e-e78c353e78ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721072 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2798c53f-d277-411d-b95d-3439db650d71-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a4447453-79a5-4008-89ec-add924803b82-audit-policies\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1963f275-b68a-4539-9371-8a38bafa03eb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c7nlk\" (UID: \"1963f275-b68a-4539-9371-8a38bafa03eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721119 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b8a5f7-869c-4343-8224-ae76d73073cf-config\") pod \"kube-controller-manager-operator-78b949d7b-8drdx\" (UID: \"12b8a5f7-869c-4343-8224-ae76d73073cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721134 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-etcd-ca\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721155 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eef415f0-0fe2-4c5c-a528-3394ce644ff1-images\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 
crc kubenswrapper[4858]: I0218 00:36:30.721218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkrps\" (UniqueName: \"kubernetes.io/projected/a82bb6ce-4801-417a-a4e2-93d1667999ee-kube-api-access-fkrps\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721250 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/646ba69d-8375-436b-a16f-e7bae5475ac6-trusted-ca\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c607470-6245-49d5-9509-009900f0adef-config\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721281 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c607470-6245-49d5-9509-009900f0adef-serving-cert\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl5vk\" (UniqueName: \"kubernetes.io/projected/0c607470-6245-49d5-9509-009900f0adef-kube-api-access-sl5vk\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-config\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721348 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/849a4228-f4ae-4b7f-a2c8-5db413e4dd28-metrics-tls\") pod \"dns-operator-744455d44c-kqbdg\" (UID: \"849a4228-f4ae-4b7f-a2c8-5db413e4dd28\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75br\" (UniqueName: \"kubernetes.io/projected/a4447453-79a5-4008-89ec-add924803b82-kube-api-access-r75br\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcnh9\" (UniqueName: \"kubernetes.io/projected/dac5a4df-1236-446c-91c4-1521fb88d2f4-kube-api-access-bcnh9\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721394 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eef415f0-0fe2-4c5c-a528-3394ce644ff1-proxy-tls\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721477 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-client-ca\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721783 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-oauth-serving-cert\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.721976 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.722513 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-config\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.725437 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6269b5c1-c01e-4f81-8c44-94455a9cc858-images\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.726296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-oauth-config\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.726694 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.727840 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-serving-cert\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.728002 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-config\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.728320 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c607470-6245-49d5-9509-009900f0adef-config\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.729369 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vzdrt\" (UID: \"9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.731010 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1963f275-b68a-4539-9371-8a38bafa03eb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c7nlk\" (UID: \"1963f275-b68a-4539-9371-8a38bafa03eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.731045 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-serving-cert\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.731459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-serving-cert\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:30 crc 
kubenswrapper[4858]: I0218 00:36:30.733594 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.733631 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n2kdn"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.734145 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.734644 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.734854 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.736648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c607470-6245-49d5-9509-009900f0adef-serving-cert\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.736712 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.739887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1963f275-b68a-4539-9371-8a38bafa03eb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c7nlk\" (UID: \"1963f275-b68a-4539-9371-8a38bafa03eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.767093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6269b5c1-c01e-4f81-8c44-94455a9cc858-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.770092 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.770320 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.771894 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.774568 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.774711 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.774810 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mxwrb"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.775091 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.775643 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.775717 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.776336 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7l2r7"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.776446 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.776875 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.777004 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.777683 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.777948 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.778231 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.778306 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.779274 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mfftk"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.779902 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.780142 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.780535 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.781206 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-g72m8"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.782245 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nmn6s"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.783799 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.783804 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.783970 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.784478 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.785547 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cjd57"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.786462 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.787414 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.788945 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ztstf"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.789602 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lpg4n"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.790632 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.791562 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.792665 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9jdl7"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.793619 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-27k9h"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.794871 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n879l"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.796082 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.797080 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.798668 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.798797 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p8987"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.800079 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kqbdg"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.801060 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.801874 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-26v6w"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.802615 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-26v6w" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.802818 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jnlh5"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.803642 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.803821 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29522880-n4bvp"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.805240 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.806255 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bc7mz"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.807565 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6djsl"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.808564 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.809944 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.810955 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nmn6s"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.812143 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.813103 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gtw2k"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.814108 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.815621 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.817922 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n2kdn"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.819228 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.819346 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qf4s8"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.820406 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mfftk"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.821886 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jnlh5"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvvr8\" (UniqueName: \"kubernetes.io/projected/849a4228-f4ae-4b7f-a2c8-5db413e4dd28-kube-api-access-qvvr8\") pod \"dns-operator-744455d44c-kqbdg\" (UID: \"849a4228-f4ae-4b7f-a2c8-5db413e4dd28\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4447453-79a5-4008-89ec-add924803b82-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822263 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-etcd-serving-ca\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822280 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ad236d-4644-4e49-b9d5-194b2746a760-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mkz5l\" (UID: \"08ad236d-4644-4e49-b9d5-194b2746a760\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822306 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64aaf596-bd11-435d-97ae-0c02f0f93c9f-serving-cert\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822328 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5977586b-6538-4050-bfde-dde62e4d87cd-serviceca\") pod \"image-pruner-29522880-n4bvp\" (UID: \"5977586b-6538-4050-bfde-dde62e4d87cd\") " pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822360 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-config\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822379 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-profile-collector-cert\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/896376a1-7809-4597-a315-2089547c2f89-config\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822411 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqllt\" (UniqueName: \"kubernetes.io/projected/5977586b-6538-4050-bfde-dde62e4d87cd-kube-api-access-hqllt\") pod \"image-pruner-29522880-n4bvp\" (UID: \"5977586b-6538-4050-bfde-dde62e4d87cd\") " pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822442 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2798c53f-d277-411d-b95d-3439db650d71-trusted-ca\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-serving-cert\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822481 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qb44\" (UniqueName: \"kubernetes.io/projected/afbe6075-e81f-464a-bfb5-7e97510ee945-kube-api-access-5qb44\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822510 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8506161-354f-42a0-8f15-9c02ba3fe215-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pjmlv\" (UID: \"f8506161-354f-42a0-8f15-9c02ba3fe215\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4mtt\" (UniqueName: 
\"kubernetes.io/projected/a3cba07a-2fd4-4794-bae6-53b73a54905a-kube-api-access-j4mtt\") pod \"multus-admission-controller-857f4d67dd-n2kdn\" (UID: \"a3cba07a-2fd4-4794-bae6-53b73a54905a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822547 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bljcd\" (UniqueName: \"kubernetes.io/projected/64aaf596-bd11-435d-97ae-0c02f0f93c9f-kube-api-access-bljcd\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822600 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcdnr\" (UniqueName: \"kubernetes.io/projected/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-kube-api-access-wcdnr\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pdvw\" (UniqueName: \"kubernetes.io/projected/f8506161-354f-42a0-8f15-9c02ba3fe215-kube-api-access-4pdvw\") pod \"olm-operator-6b444d44fb-pjmlv\" (UID: \"f8506161-354f-42a0-8f15-9c02ba3fe215\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3cba07a-2fd4-4794-bae6-53b73a54905a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n2kdn\" (UID: \"a3cba07a-2fd4-4794-bae6-53b73a54905a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822647 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/646ba69d-8375-436b-a16f-e7bae5475ac6-serving-cert\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822662 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-stats-auth\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4447453-79a5-4008-89ec-add924803b82-serving-cert\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: 
\"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822691 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njhvx\" (UniqueName: \"kubernetes.io/projected/92e95ff1-a825-4d17-825f-f4765353a5f2-kube-api-access-njhvx\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krmrf\" (UniqueName: \"kubernetes.io/projected/eef415f0-0fe2-4c5c-a528-3394ce644ff1-kube-api-access-krmrf\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822721 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c5ceae-0c80-4b01-a773-8c222c900f34-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zk6hd\" (UID: \"44c5ceae-0c80-4b01-a773-8c222c900f34\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822736 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-image-import-ca\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822752 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6mc7\" (UniqueName: \"kubernetes.io/projected/08ad236d-4644-4e49-b9d5-194b2746a760-kube-api-access-q6mc7\") pod \"openshift-controller-manager-operator-756b6f6bc6-mkz5l\" (UID: \"08ad236d-4644-4e49-b9d5-194b2746a760\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822855 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822871 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/896376a1-7809-4597-a315-2089547c2f89-machine-approver-tls\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4zs9\" (UniqueName: \"kubernetes.io/projected/896376a1-7809-4597-a315-2089547c2f89-kube-api-access-t4zs9\") pod \"machine-approver-56656f9798-kzbz5\" 
(UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822909 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp54\" (UniqueName: \"kubernetes.io/projected/93a92f34-d9a8-4276-8a97-3f129c4db452-kube-api-access-2cp54\") pod \"control-plane-machine-set-operator-78cbb6b69f-574cj\" (UID: \"93a92f34-d9a8-4276-8a97-3f129c4db452\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822925 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f95d7fd9-797f-464f-ac5e-e78c353e78ee-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wnh9x\" (UID: \"f95d7fd9-797f-464f-ac5e-e78c353e78ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822939 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2798c53f-d277-411d-b95d-3439db650d71-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822954 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a4447453-79a5-4008-89ec-add924803b82-audit-policies\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822971 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbe6075-e81f-464a-bfb5-7e97510ee945-service-ca-bundle\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.822993 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eef415f0-0fe2-4c5c-a528-3394ce644ff1-images\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823017 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/646ba69d-8375-436b-a16f-e7bae5475ac6-trusted-ca\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/849a4228-f4ae-4b7f-a2c8-5db413e4dd28-metrics-tls\") pod \"dns-operator-744455d44c-kqbdg\" (UID: \"849a4228-f4ae-4b7f-a2c8-5db413e4dd28\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" Feb 18 00:36:30 crc kubenswrapper[4858]: 
I0218 00:36:30.823081 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eef415f0-0fe2-4c5c-a528-3394ce644ff1-proxy-tls\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823104 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823128 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b8a5f7-869c-4343-8224-ae76d73073cf-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8drdx\" (UID: \"12b8a5f7-869c-4343-8224-ae76d73073cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823158 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-srv-cert\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4447453-79a5-4008-89ec-add924803b82-encryption-config\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823282 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64aaf596-bd11-435d-97ae-0c02f0f93c9f-etcd-client\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823297 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-config-volume\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823317 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqpcl\" (UniqueName: \"kubernetes.io/projected/44c5ceae-0c80-4b01-a773-8c222c900f34-kube-api-access-zqpcl\") pod \"kube-storage-version-migrator-operator-b67b599dd-zk6hd\" (UID: \"44c5ceae-0c80-4b01-a773-8c222c900f34\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823332 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9bn8\" (UniqueName: \"kubernetes.io/projected/b0bd9345-840f-40e8-946d-b646e19a6b39-kube-api-access-s9bn8\") pod \"migrator-59844c95c7-c7bmt\" (UID: \"b0bd9345-840f-40e8-946d-b646e19a6b39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823346 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646ba69d-8375-436b-a16f-e7bae5475ac6-config\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823370 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-etcd-service-ca\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2798c53f-d277-411d-b95d-3439db650d71-metrics-tls\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823419 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-secret-volume\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f95d7fd9-797f-464f-ac5e-e78c353e78ee-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wnh9x\" (UID: \"f95d7fd9-797f-464f-ac5e-e78c353e78ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823449 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xskds\" (UniqueName: 
\"kubernetes.io/projected/2798c53f-d277-411d-b95d-3439db650d71-kube-api-access-xskds\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4447453-79a5-4008-89ec-add924803b82-etcd-client\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823515 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64aaf596-bd11-435d-97ae-0c02f0f93c9f-audit-dir\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stnrw\" (UniqueName: \"kubernetes.io/projected/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-kube-api-access-stnrw\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823762 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-policies\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823823 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4447453-79a5-4008-89ec-add924803b82-audit-dir\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823840 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-audit\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823894 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823911 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4447453-79a5-4008-89ec-add924803b82-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c5ceae-0c80-4b01-a773-8c222c900f34-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zk6hd\" (UID: \"44c5ceae-0c80-4b01-a773-8c222c900f34\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.823981 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-config\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824008 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f95d7fd9-797f-464f-ac5e-e78c353e78ee-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wnh9x\" (UID: \"f95d7fd9-797f-464f-ac5e-e78c353e78ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ad236d-4644-4e49-b9d5-194b2746a760-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mkz5l\" (UID: \"08ad236d-4644-4e49-b9d5-194b2746a760\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824070 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/93a92f34-d9a8-4276-8a97-3f129c4db452-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-574cj\" (UID: \"93a92f34-d9a8-4276-8a97-3f129c4db452\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824088 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12b8a5f7-869c-4343-8224-ae76d73073cf-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8drdx\" (UID: \"12b8a5f7-869c-4343-8224-ae76d73073cf\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824108 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqdgp\" (UniqueName: \"kubernetes.io/projected/b132ff06-2a28-42df-b43b-f923a76b4cca-kube-api-access-lqdgp\") pod \"downloads-7954f5f757-6djsl\" (UID: \"b132ff06-2a28-42df-b43b-f923a76b4cca\") " pod="openshift-console/downloads-7954f5f757-6djsl" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-etcd-client\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824226 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mftm7\" (UniqueName: \"kubernetes.io/projected/646ba69d-8375-436b-a16f-e7bae5475ac6-kube-api-access-mftm7\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b8a5f7-869c-4343-8224-ae76d73073cf-config\") pod \"kube-controller-manager-operator-78b949d7b-8drdx\" (UID: \"12b8a5f7-869c-4343-8224-ae76d73073cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-etcd-ca\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824463 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64aaf596-bd11-435d-97ae-0c02f0f93c9f-encryption-config\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824489 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824518 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r75br\" (UniqueName: \"kubernetes.io/projected/a4447453-79a5-4008-89ec-add924803b82-kube-api-access-r75br\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64aaf596-bd11-435d-97ae-0c02f0f93c9f-node-pullsecrets\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8506161-354f-42a0-8f15-9c02ba3fe215-srv-cert\") pod \"olm-operator-6b444d44fb-pjmlv\" (UID: \"f8506161-354f-42a0-8f15-9c02ba3fe215\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4447453-79a5-4008-89ec-add924803b82-audit-dir\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824683 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/896376a1-7809-4597-a315-2089547c2f89-auth-proxy-config\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eef415f0-0fe2-4c5c-a528-3394ce644ff1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824737 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5496t\" (UniqueName: \"kubernetes.io/projected/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-kube-api-access-5496t\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-dir\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824795 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-default-certificate\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824860 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4447453-79a5-4008-89ec-add924803b82-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.825158 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a4447453-79a5-4008-89ec-add924803b82-audit-policies\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.825537 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ad236d-4644-4e49-b9d5-194b2746a760-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mkz5l\" (UID: \"08ad236d-4644-4e49-b9d5-194b2746a760\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.825874 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7l2r7"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.825904 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.826168 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2798c53f-d277-411d-b95d-3439db650d71-trusted-ca\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.826181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4447453-79a5-4008-89ec-add924803b82-encryption-config\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.826187 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/646ba69d-8375-436b-a16f-e7bae5475ac6-trusted-ca\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.826545 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5977586b-6538-4050-bfde-dde62e4d87cd-serviceca\") pod \"image-pruner-29522880-n4bvp\" (UID: \"5977586b-6538-4050-bfde-dde62e4d87cd\") " pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.824879 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-metrics-certs\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.826633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.826666 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646ba69d-8375-436b-a16f-e7bae5475ac6-config\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.826770 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4447453-79a5-4008-89ec-add924803b82-serving-cert\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.827043 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b8a5f7-869c-4343-8224-ae76d73073cf-config\") pod \"kube-controller-manager-operator-78b949d7b-8drdx\" (UID: \"12b8a5f7-869c-4343-8224-ae76d73073cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.827662 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4447453-79a5-4008-89ec-add924803b82-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.827996 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.831174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eef415f0-0fe2-4c5c-a528-3394ce644ff1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: 
\"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.831643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-etcd-client\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.833441 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-serving-cert\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.833977 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2798c53f-d277-411d-b95d-3439db650d71-metrics-tls\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.834232 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.834302 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/849a4228-f4ae-4b7f-a2c8-5db413e4dd28-metrics-tls\") pod \"dns-operator-744455d44c-kqbdg\" (UID: \"849a4228-f4ae-4b7f-a2c8-5db413e4dd28\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.834882 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.834925 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-dir\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.834961 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-policies\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.835633 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.837035 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.838990 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c5ceae-0c80-4b01-a773-8c222c900f34-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zk6hd\" (UID: \"44c5ceae-0c80-4b01-a773-8c222c900f34\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.839596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-config\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.839974 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12b8a5f7-869c-4343-8224-ae76d73073cf-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8drdx\" (UID: \"12b8a5f7-869c-4343-8224-ae76d73073cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.840067 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.840199 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.840518 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/93a92f34-d9a8-4276-8a97-3f129c4db452-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-574cj\" (UID: \"93a92f34-d9a8-4276-8a97-3f129c4db452\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.841356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.841703 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.842077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.842579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/646ba69d-8375-436b-a16f-e7bae5475ac6-serving-cert\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.842746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.843734 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.845243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ad236d-4644-4e49-b9d5-194b2746a760-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mkz5l\" (UID: \"08ad236d-4644-4e49-b9d5-194b2746a760\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.845322 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4447453-79a5-4008-89ec-add924803b82-etcd-client\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.846566 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c5ceae-0c80-4b01-a773-8c222c900f34-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zk6hd\" (UID: \"44c5ceae-0c80-4b01-a773-8c222c900f34\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.846606 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: 
\"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.847414 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-26v6w"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.849107 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-etcd-ca\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.849140 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6"] Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.859655 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.861583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-etcd-service-ca\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.878856 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.898783 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.907281 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/896376a1-7809-4597-a315-2089547c2f89-machine-approver-tls\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.918997 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.927799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-metrics-certs\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.927838 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-etcd-serving-ca\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.927889 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64aaf596-bd11-435d-97ae-0c02f0f93c9f-serving-cert\") pod \"apiserver-76f77b778f-bc7mz\" (UID: 
\"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.927924 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-config\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.927954 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-profile-collector-cert\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qb44\" (UniqueName: \"kubernetes.io/projected/afbe6075-e81f-464a-bfb5-7e97510ee945-kube-api-access-5qb44\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8506161-354f-42a0-8f15-9c02ba3fe215-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pjmlv\" (UID: \"f8506161-354f-42a0-8f15-9c02ba3fe215\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4mtt\" (UniqueName: \"kubernetes.io/projected/a3cba07a-2fd4-4794-bae6-53b73a54905a-kube-api-access-j4mtt\") pod \"multus-admission-controller-857f4d67dd-n2kdn\" (UID: \"a3cba07a-2fd4-4794-bae6-53b73a54905a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928067 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bljcd\" (UniqueName: \"kubernetes.io/projected/64aaf596-bd11-435d-97ae-0c02f0f93c9f-kube-api-access-bljcd\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-stats-auth\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pdvw\" (UniqueName: \"kubernetes.io/projected/f8506161-354f-42a0-8f15-9c02ba3fe215-kube-api-access-4pdvw\") pod \"olm-operator-6b444d44fb-pjmlv\" (UID: \"f8506161-354f-42a0-8f15-9c02ba3fe215\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928139 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3cba07a-2fd4-4794-bae6-53b73a54905a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n2kdn\" (UID: \"a3cba07a-2fd4-4794-bae6-53b73a54905a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928208 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-image-import-ca\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbe6075-e81f-464a-bfb5-7e97510ee945-service-ca-bundle\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928325 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928355 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-srv-cert\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928371 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64aaf596-bd11-435d-97ae-0c02f0f93c9f-etcd-client\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-config-volume\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-secret-volume\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928456 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64aaf596-bd11-435d-97ae-0c02f0f93c9f-audit-dir\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" 
Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928473 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stnrw\" (UniqueName: \"kubernetes.io/projected/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-kube-api-access-stnrw\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-audit\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64aaf596-bd11-435d-97ae-0c02f0f93c9f-encryption-config\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928553 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64aaf596-bd11-435d-97ae-0c02f0f93c9f-audit-dir\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928600 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64aaf596-bd11-435d-97ae-0c02f0f93c9f-node-pullsecrets\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928622 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8506161-354f-42a0-8f15-9c02ba3fe215-srv-cert\") pod \"olm-operator-6b444d44fb-pjmlv\" (UID: \"f8506161-354f-42a0-8f15-9c02ba3fe215\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5496t\" (UniqueName: \"kubernetes.io/projected/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-kube-api-access-5496t\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-default-certificate\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.928699 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64aaf596-bd11-435d-97ae-0c02f0f93c9f-node-pullsecrets\") pod \"apiserver-76f77b778f-bc7mz\" (UID: 
\"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.947726 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.948731 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/896376a1-7809-4597-a315-2089547c2f89-auth-proxy-config\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.962008 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.964826 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/896376a1-7809-4597-a315-2089547c2f89-config\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.978833 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 00:36:30 crc kubenswrapper[4858]: I0218 00:36:30.999023 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.018866 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.038908 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.060268 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.074656 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f95d7fd9-797f-464f-ac5e-e78c353e78ee-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wnh9x\" (UID: \"f95d7fd9-797f-464f-ac5e-e78c353e78ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.079831 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.086558 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f95d7fd9-797f-464f-ac5e-e78c353e78ee-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wnh9x\" (UID: \"f95d7fd9-797f-464f-ac5e-e78c353e78ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.119119 4858 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.125738 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eef415f0-0fe2-4c5c-a528-3394ce644ff1-images\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.139670 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.159860 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.172369 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eef415f0-0fe2-4c5c-a528-3394ce644ff1-proxy-tls\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.178881 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.199663 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.221245 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.239363 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.259696 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.274095 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64aaf596-bd11-435d-97ae-0c02f0f93c9f-encryption-config\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.279185 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.300070 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.313018 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64aaf596-bd11-435d-97ae-0c02f0f93c9f-etcd-client\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.321188 4858 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.333980 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64aaf596-bd11-435d-97ae-0c02f0f93c9f-serving-cert\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.339910 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.359061 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.379041 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.409583 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.419945 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.421109 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.439365 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.459836 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.469741 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-config\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.479969 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.490365 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-audit\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.500273 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.509145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-etcd-serving-ca\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:31 crc 
kubenswrapper[4858]: I0218 00:36:31.520390 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.529794 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64aaf596-bd11-435d-97ae-0c02f0f93c9f-image-import-ca\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.540282 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.560109 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.579685 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.599382 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.646369 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4gqk\" (UniqueName: \"kubernetes.io/projected/9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b-kube-api-access-g4gqk\") pod \"cluster-samples-operator-665b6dd947-vzdrt\" (UID: \"9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.668972 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4j94\" (UniqueName: \"kubernetes.io/projected/6269b5c1-c01e-4f81-8c44-94455a9cc858-kube-api-access-n4j94\") pod \"machine-api-operator-5694c8668f-27k9h\" (UID: \"6269b5c1-c01e-4f81-8c44-94455a9cc858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.677752 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.682398 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.683546 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn29m\" (UniqueName: \"kubernetes.io/projected/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-kube-api-access-jn29m\") pod \"controller-manager-879f6c89f-ztstf\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.697257 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwhxq\" (UniqueName: \"kubernetes.io/projected/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-kube-api-access-fwhxq\") pod \"route-controller-manager-6576b87f9c-jx5ng\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.720241 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz259\" (UniqueName: \"kubernetes.io/projected/1963f275-b68a-4539-9371-8a38bafa03eb-kube-api-access-gz259\") pod \"openshift-apiserver-operator-796bbdcf4f-c7nlk\" (UID: \"1963f275-b68a-4539-9371-8a38bafa03eb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.737041 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dac5a4df-1236-446c-91c4-1521fb88d2f4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.737411 4858 request.go:700] Waited for 1.010533034s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.752263 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrps\" (UniqueName: \"kubernetes.io/projected/a82bb6ce-4801-417a-a4e2-93d1667999ee-kube-api-access-fkrps\") pod \"console-f9d7485db-lpg4n\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.765973 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.773223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcnh9\" (UniqueName: \"kubernetes.io/projected/dac5a4df-1236-446c-91c4-1521fb88d2f4-kube-api-access-bcnh9\") pod \"cluster-image-registry-operator-dc59b4c8b-4l24x\" (UID: \"dac5a4df-1236-446c-91c4-1521fb88d2f4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.800715 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl5vk\" (UniqueName: \"kubernetes.io/projected/0c607470-6245-49d5-9509-009900f0adef-kube-api-access-sl5vk\") pod \"authentication-operator-69f744f599-s76q5\" (UID: \"0c607470-6245-49d5-9509-009900f0adef\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.800866 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.805144 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.823719 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.833040 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a3cba07a-2fd4-4794-bae6-53b73a54905a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n2kdn\" (UID: \"a3cba07a-2fd4-4794-bae6-53b73a54905a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.833834 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.840288 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.846122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8506161-354f-42a0-8f15-9c02ba3fe215-srv-cert\") pod \"olm-operator-6b444d44fb-pjmlv\" (UID: \"f8506161-354f-42a0-8f15-9c02ba3fe215\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.859094 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.868943 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8506161-354f-42a0-8f15-9c02ba3fe215-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pjmlv\" (UID: \"f8506161-354f-42a0-8f15-9c02ba3fe215\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.875222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-profile-collector-cert\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.875343 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-secret-volume\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.880252 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.894339 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.899012 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.906827 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt"] Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.910615 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.919308 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.930609 4858 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.930710 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-default-certificate podName:afbe6075-e81f-464a-bfb5-7e97510ee945 nodeName:}" failed. No retries permitted until 2026-02-18 00:36:32.430684975 +0000 UTC m=+145.736521707 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-default-certificate") pod "router-default-5444994796-mxwrb" (UID: "afbe6075-e81f-464a-bfb5-7e97510ee945") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931232 4858 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931283 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-stats-auth podName:afbe6075-e81f-464a-bfb5-7e97510ee945 nodeName:}" failed. No retries permitted until 2026-02-18 00:36:32.431259859 +0000 UTC m=+145.737096591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-stats-auth") pod "router-default-5444994796-mxwrb" (UID: "afbe6075-e81f-464a-bfb5-7e97510ee945") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931329 4858 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931360 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/afbe6075-e81f-464a-bfb5-7e97510ee945-service-ca-bundle podName:afbe6075-e81f-464a-bfb5-7e97510ee945 nodeName:}" failed. No retries permitted until 2026-02-18 00:36:32.431353381 +0000 UTC m=+145.737190113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/afbe6075-e81f-464a-bfb5-7e97510ee945-service-ca-bundle") pod "router-default-5444994796-mxwrb" (UID: "afbe6075-e81f-464a-bfb5-7e97510ee945") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931394 4858 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931416 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-config-volume podName:a5bd9f27-973a-4ec3-91b8-87c2c20c6c34 nodeName:}" failed. 
No retries permitted until 2026-02-18 00:36:32.431407122 +0000 UTC m=+145.737243854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-config-volume") pod "collect-profiles-29522910-dtdsw" (UID: "a5bd9f27-973a-4ec3-91b8-87c2c20c6c34") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931431 4858 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931453 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-srv-cert podName:d56a2ef9-2679-43f8-bf70-3b8f1eea8c70 nodeName:}" failed. No retries permitted until 2026-02-18 00:36:32.431448263 +0000 UTC m=+145.737284995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-srv-cert") pod "catalog-operator-68c6474976-9dqf6" (UID: "d56a2ef9-2679-43f8-bf70-3b8f1eea8c70") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931474 4858 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: E0218 00:36:31.931513 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-metrics-certs podName:afbe6075-e81f-464a-bfb5-7e97510ee945 nodeName:}" failed. No retries permitted until 2026-02-18 00:36:32.431487444 +0000 UTC m=+145.737324176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-metrics-certs") pod "router-default-5444994796-mxwrb" (UID: "afbe6075-e81f-464a-bfb5-7e97510ee945") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.931759 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-27k9h"] Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.939741 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.962202 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.967896 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng"] Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.981244 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 00:36:31 crc kubenswrapper[4858]: I0218 00:36:31.991645 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:31.999882 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.021727 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.034744 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lpg4n"] Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.039865 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.059866 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.070141 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s76q5"] Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.083902 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.099053 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.119580 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.140185 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.142214 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ztstf"] Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.159955 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: W0218 00:36:32.169675 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d7e94f0_dd10_424a_8a9f_e3d98854c5ba.slice/crio-fef11b98050f06100b16e13153829a2a37c3547dcc522691d0d0924e37fbc312 WatchSource:0}: Error finding container fef11b98050f06100b16e13153829a2a37c3547dcc522691d0d0924e37fbc312: Status 404 returned error can't find the container with id fef11b98050f06100b16e13153829a2a37c3547dcc522691d0d0924e37fbc312 Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.179290 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.199101 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x"] Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.200599 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.218978 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 00:36:32 
crc kubenswrapper[4858]: W0218 00:36:32.223010 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac5a4df_1236_446c_91c4_1521fb88d2f4.slice/crio-7a0ecab140a4da761e6426ec5ec1c83b5ffc13145ea14d70cc7060387edaa493 WatchSource:0}: Error finding container 7a0ecab140a4da761e6426ec5ec1c83b5ffc13145ea14d70cc7060387edaa493: Status 404 returned error can't find the container with id 7a0ecab140a4da761e6426ec5ec1c83b5ffc13145ea14d70cc7060387edaa493 Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.233561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lpg4n" event={"ID":"a82bb6ce-4801-417a-a4e2-93d1667999ee","Type":"ContainerStarted","Data":"b7f1f89d9e0269667a5d99f9d919a6e5d14403ba3a2e94ef02d07c901f1edb0a"} Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.235039 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" event={"ID":"0c607470-6245-49d5-9509-009900f0adef","Type":"ContainerStarted","Data":"47772ca993d8307b6cb19e6f1b8de99778c37ea103a2219e288866f322d8712d"} Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.238425 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk"] Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.240274 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.241725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" event={"ID":"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba","Type":"ContainerStarted","Data":"fef11b98050f06100b16e13153829a2a37c3547dcc522691d0d0924e37fbc312"} Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.243569 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" event={"ID":"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383","Type":"ContainerStarted","Data":"2c8e523cb317142556550493c13c60f9236117987a7899458ccfe36014406aed"} Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.243615 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" event={"ID":"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383","Type":"ContainerStarted","Data":"deadd615ef7cafa1e15162d1df11e07a6ccd324d07d0d0efdf50c81d7204a6b5"} Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.244443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" event={"ID":"9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b","Type":"ContainerStarted","Data":"771154341f0c967541287e3201ead1d9a71d701caf0d921dfbbb8cdf1e2e1e6b"} Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.245881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" event={"ID":"6269b5c1-c01e-4f81-8c44-94455a9cc858","Type":"ContainerStarted","Data":"c986a52a2fa4e7dcc6f74f79e581c4718a93f139bfc5acaf0d5effefacbe7274"} Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.245907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" 
event={"ID":"6269b5c1-c01e-4f81-8c44-94455a9cc858","Type":"ContainerStarted","Data":"32f9b83754296266c9890677ac1f40de81e4c48f3d1ca41b34ca0ef1b0357366"} Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.246845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" event={"ID":"dac5a4df-1236-446c-91c4-1521fb88d2f4","Type":"ContainerStarted","Data":"7a0ecab140a4da761e6426ec5ec1c83b5ffc13145ea14d70cc7060387edaa493"} Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.260729 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 00:36:32 crc kubenswrapper[4858]: W0218 00:36:32.261470 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1963f275_b68a_4539_9371_8a38bafa03eb.slice/crio-297f5c1e4f911086e741a9bac2d91d928396191598d7a0244f4aaf8abe4aeef8 WatchSource:0}: Error finding container 297f5c1e4f911086e741a9bac2d91d928396191598d7a0244f4aaf8abe4aeef8: Status 404 returned error can't find the container with id 297f5c1e4f911086e741a9bac2d91d928396191598d7a0244f4aaf8abe4aeef8 Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.279067 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.300014 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.319246 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.338993 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.360728 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.379275 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.406866 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.425483 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.439679 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.454961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-config-volume\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.455139 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-default-certificate\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.455178 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-metrics-certs\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.455325 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-stats-auth\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.456771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbe6075-e81f-464a-bfb5-7e97510ee945-service-ca-bundle\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.456825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbe6075-e81f-464a-bfb5-7e97510ee945-service-ca-bundle\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.456887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-srv-cert\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.459834 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.460916 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-default-certificate\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.461942 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-stats-auth\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.462750 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-srv-cert\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.464865 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afbe6075-e81f-464a-bfb5-7e97510ee945-metrics-certs\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.479088 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.486626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-config-volume\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.500201 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.519774 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.539521 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.560811 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.580154 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.600203 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.612802 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.619686 4858 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.659544 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.679293 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.700313 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.719181 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.737778 4858 request.go:700] Waited for 1.933912991s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0 Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.740065 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.759952 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.779257 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.817137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvvr8\" (UniqueName: \"kubernetes.io/projected/849a4228-f4ae-4b7f-a2c8-5db413e4dd28-kube-api-access-qvvr8\") pod \"dns-operator-744455d44c-kqbdg\" (UID: \"849a4228-f4ae-4b7f-a2c8-5db413e4dd28\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.840968 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqllt\" (UniqueName: \"kubernetes.io/projected/5977586b-6538-4050-bfde-dde62e4d87cd-kube-api-access-hqllt\") pod \"image-pruner-29522880-n4bvp\" (UID: \"5977586b-6538-4050-bfde-dde62e4d87cd\") " pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.858571 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4zs9\" (UniqueName: \"kubernetes.io/projected/896376a1-7809-4597-a315-2089547c2f89-kube-api-access-t4zs9\") pod \"machine-approver-56656f9798-kzbz5\" (UID: \"896376a1-7809-4597-a315-2089547c2f89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.885100 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp54\" (UniqueName: \"kubernetes.io/projected/93a92f34-d9a8-4276-8a97-3f129c4db452-kube-api-access-2cp54\") pod \"control-plane-machine-set-operator-78cbb6b69f-574cj\" (UID: \"93a92f34-d9a8-4276-8a97-3f129c4db452\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.904018 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f95d7fd9-797f-464f-ac5e-e78c353e78ee-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wnh9x\" (UID: \"f95d7fd9-797f-464f-ac5e-e78c353e78ee\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.905909 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.910969 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.914140 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2798c53f-d277-411d-b95d-3439db650d71-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.936849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqpcl\" (UniqueName: \"kubernetes.io/projected/44c5ceae-0c80-4b01-a773-8c222c900f34-kube-api-access-zqpcl\") pod \"kube-storage-version-migrator-operator-b67b599dd-zk6hd\" (UID: \"44c5ceae-0c80-4b01-a773-8c222c900f34\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.959813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9bn8\" (UniqueName: \"kubernetes.io/projected/b0bd9345-840f-40e8-946d-b646e19a6b39-kube-api-access-s9bn8\") pod \"migrator-59844c95c7-c7bmt\" (UID: \"b0bd9345-840f-40e8-946d-b646e19a6b39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.976915 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njhvx\" (UniqueName: \"kubernetes.io/projected/92e95ff1-a825-4d17-825f-f4765353a5f2-kube-api-access-njhvx\") pod \"oauth-openshift-558db77b4-cjd57\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:32 crc kubenswrapper[4858]: I0218 00:36:32.992618 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:32.999270 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:32.999611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krmrf\" (UniqueName: \"kubernetes.io/projected/eef415f0-0fe2-4c5c-a528-3394ce644ff1-kube-api-access-krmrf\") pod \"machine-config-operator-74547568cd-54bhc\" (UID: \"eef415f0-0fe2-4c5c-a528-3394ce644ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.011025 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.019138 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.025460 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.038292 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12b8a5f7-869c-4343-8224-ae76d73073cf-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8drdx\" (UID: \"12b8a5f7-869c-4343-8224-ae76d73073cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.045245 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r75br\" (UniqueName: \"kubernetes.io/projected/a4447453-79a5-4008-89ec-add924803b82-kube-api-access-r75br\") pod \"apiserver-7bbb656c7d-6rx4q\" (UID: \"a4447453-79a5-4008-89ec-add924803b82\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.048547 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.055341 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqdgp\" (UniqueName: \"kubernetes.io/projected/b132ff06-2a28-42df-b43b-f923a76b4cca-kube-api-access-lqdgp\") pod \"downloads-7954f5f757-6djsl\" (UID: \"b132ff06-2a28-42df-b43b-f923a76b4cca\") " pod="openshift-console/downloads-7954f5f757-6djsl" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.076783 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6djsl" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.084113 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mftm7\" (UniqueName: \"kubernetes.io/projected/646ba69d-8375-436b-a16f-e7bae5475ac6-kube-api-access-mftm7\") pod \"console-operator-58897d9998-gtw2k\" (UID: \"646ba69d-8375-436b-a16f-e7bae5475ac6\") " pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.093780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mc7\" (UniqueName: \"kubernetes.io/projected/08ad236d-4644-4e49-b9d5-194b2746a760-kube-api-access-q6mc7\") pod \"openshift-controller-manager-operator-756b6f6bc6-mkz5l\" (UID: \"08ad236d-4644-4e49-b9d5-194b2746a760\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.118196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xskds\" (UniqueName: \"kubernetes.io/projected/2798c53f-d277-411d-b95d-3439db650d71-kube-api-access-xskds\") pod \"ingress-operator-5b745b69d9-n879l\" (UID: \"2798c53f-d277-411d-b95d-3439db650d71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.144625 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcdnr\" (UniqueName: \"kubernetes.io/projected/c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c-kube-api-access-wcdnr\") pod \"etcd-operator-b45778765-9jdl7\" (UID: \"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:33 crc 
kubenswrapper[4858]: I0218 00:36:33.161079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4mtt\" (UniqueName: \"kubernetes.io/projected/a3cba07a-2fd4-4794-bae6-53b73a54905a-kube-api-access-j4mtt\") pod \"multus-admission-controller-857f4d67dd-n2kdn\" (UID: \"a3cba07a-2fd4-4794-bae6-53b73a54905a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.183472 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kqbdg"] Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.184333 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qb44\" (UniqueName: \"kubernetes.io/projected/afbe6075-e81f-464a-bfb5-7e97510ee945-kube-api-access-5qb44\") pod \"router-default-5444994796-mxwrb\" (UID: \"afbe6075-e81f-464a-bfb5-7e97510ee945\") " pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.198705 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pdvw\" (UniqueName: \"kubernetes.io/projected/f8506161-354f-42a0-8f15-9c02ba3fe215-kube-api-access-4pdvw\") pod \"olm-operator-6b444d44fb-pjmlv\" (UID: \"f8506161-354f-42a0-8f15-9c02ba3fe215\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.200380 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.206956 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj"] Feb 18 00:36:33 crc kubenswrapper[4858]: W0218 00:36:33.214731 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod849a4228_f4ae_4b7f_a2c8_5db413e4dd28.slice/crio-c15e3069639afc9df28109eade5fd94decf43bd2bc8821e89d0f2337311c4059 WatchSource:0}: Error finding container c15e3069639afc9df28109eade5fd94decf43bd2bc8821e89d0f2337311c4059: Status 404 returned error can't find the container with id c15e3069639afc9df28109eade5fd94decf43bd2bc8821e89d0f2337311c4059 Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.216999 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bljcd\" (UniqueName: \"kubernetes.io/projected/64aaf596-bd11-435d-97ae-0c02f0f93c9f-kube-api-access-bljcd\") pod \"apiserver-76f77b778f-bc7mz\" (UID: \"64aaf596-bd11-435d-97ae-0c02f0f93c9f\") " pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.220160 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.228223 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.232609 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stnrw\" (UniqueName: \"kubernetes.io/projected/d56a2ef9-2679-43f8-bf70-3b8f1eea8c70-kube-api-access-stnrw\") pod \"catalog-operator-68c6474976-9dqf6\" (UID: \"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.253803 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5496t\" (UniqueName: \"kubernetes.io/projected/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-kube-api-access-5496t\") pod \"collect-profiles-29522910-dtdsw\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.263287 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" event={"ID":"dac5a4df-1236-446c-91c4-1521fb88d2f4","Type":"ContainerStarted","Data":"412e46c276e24d9ef41b38916b6e9f56f3359e3afbe6945633e000762f690af3"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.264727 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" event={"ID":"0c607470-6245-49d5-9509-009900f0adef","Type":"ContainerStarted","Data":"edc560213c9426e6db0e4b39c620629a7f2ff87e5111e8f052a2f30a72428027"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.265688 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" event={"ID":"849a4228-f4ae-4b7f-a2c8-5db413e4dd28","Type":"ContainerStarted","Data":"c15e3069639afc9df28109eade5fd94decf43bd2bc8821e89d0f2337311c4059"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.269505 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lpg4n" event={"ID":"a82bb6ce-4801-417a-a4e2-93d1667999ee","Type":"ContainerStarted","Data":"e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.272220 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.272425 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" event={"ID":"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba","Type":"ContainerStarted","Data":"43cc232211a808de4ccadb4be357271e9ce5b36aa4013d7c83421c138e02db43"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.272730 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.273441 4858 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ztstf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.273472 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" podUID="5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.274192 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" event={"ID":"1963f275-b68a-4539-9371-8a38bafa03eb","Type":"ContainerStarted","Data":"95c7da236cedf97e9681bb5a3b56dcf9bc45ef72a728e23b95da24e546c3afe6"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.274249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" event={"ID":"1963f275-b68a-4539-9371-8a38bafa03eb","Type":"ContainerStarted","Data":"297f5c1e4f911086e741a9bac2d91d928396191598d7a0244f4aaf8abe4aeef8"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.276930 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.277707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" event={"ID":"9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b","Type":"ContainerStarted","Data":"c9f1e24c98dba177bd79501d9896a4773c8397b44ca6261a4658321720ccc697"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.277726 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" event={"ID":"9fcbc6ed-02b2-4c98-b9c6-cb4e70a0fc6b","Type":"ContainerStarted","Data":"fb0d8f3a6028de94acc3cf3a59dcd97e99e4fcec3a72d7972660d41a57ceafb6"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.281301 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" event={"ID":"896376a1-7809-4597-a315-2089547c2f89","Type":"ContainerStarted","Data":"d6dc11607e8ec5bb471c1efbfd9fb777091bcf4c607feed9b586ff583321c0fc"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.282765 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.287631 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" event={"ID":"6269b5c1-c01e-4f81-8c44-94455a9cc858","Type":"ContainerStarted","Data":"9be9c216b6a83f05dddd8139fef491e1745faeaf2eba2ec49106e6f7eb6d372a"} Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.289311 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.306841 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.369928 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371190 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7cl\" (UniqueName: \"kubernetes.io/projected/79427318-6288-4dd5-8209-dae415c0dab4-kube-api-access-7s7cl\") pod \"machine-config-controller-84d6567774-wswhj\" (UID: \"79427318-6288-4dd5-8209-dae415c0dab4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371225 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/96fed31a-2574-4ee1-9781-f4cfd1f9c68b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p8987\" (UID: \"96fed31a-2574-4ee1-9781-f4cfd1f9c68b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371315 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-bound-sa-token\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371333 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96fed31a-2574-4ee1-9781-f4cfd1f9c68b-serving-cert\") pod \"openshift-config-operator-7777fb866f-p8987\" (UID: \"96fed31a-2574-4ee1-9781-f4cfd1f9c68b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371356 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-mountpoint-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfc20a1c-6687-4ddd-baad-b18790cae2f9-config\") 
pod \"service-ca-operator-777779d784-kp8d2\" (UID: \"bfc20a1c-6687-4ddd-baad-b18790cae2f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371428 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/706d9c75-e27c-4596-80d8-68bf71015ca0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-smdtb\" (UID: \"706d9c75-e27c-4596-80d8-68bf71015ca0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371444 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-plugins-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371478 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-registry-tls\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371518 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73fd9054-c7ef-49ad-b80e-db70402b6af2-config\") pod \"kube-apiserver-operator-766d6c64bb-7wmw2\" (UID: \"73fd9054-c7ef-49ad-b80e-db70402b6af2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371533 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e83b774-3784-4b56-b452-a3a04fc9929f-signing-key\") pod \"service-ca-9c57cc56f-7l2r7\" (UID: \"3e83b774-3784-4b56-b452-a3a04fc9929f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371563 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mfftk\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371579 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d21b0b26-1895-45e4-bf96-1efab1f33644-webhook-cert\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371594 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-socket-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: 
\"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371609 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d21b0b26-1895-45e4-bf96-1efab1f33644-tmpfs\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371635 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73fd9054-c7ef-49ad-b80e-db70402b6af2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7wmw2\" (UID: \"73fd9054-c7ef-49ad-b80e-db70402b6af2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371667 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnvd\" (UniqueName: \"kubernetes.io/projected/693a6651-227a-4a62-85df-4a7e667c3daf-kube-api-access-2cnvd\") pod \"marketplace-operator-79b997595-mfftk\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371693 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79427318-6288-4dd5-8209-dae415c0dab4-proxy-tls\") pod \"machine-config-controller-84d6567774-wswhj\" (UID: \"79427318-6288-4dd5-8209-dae415c0dab4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-kube-api-access-vpvb2\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371767 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-csi-data-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-trusted-ca\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371821 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvr4g\" (UniqueName: \"kubernetes.io/projected/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-kube-api-access-jvr4g\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.371902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79427318-6288-4dd5-8209-dae415c0dab4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wswhj\" (UID: \"79427318-6288-4dd5-8209-dae415c0dab4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.372001 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d21b0b26-1895-45e4-bf96-1efab1f33644-apiservice-cert\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.372045 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-registration-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.372063 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzfcd\" (UniqueName: \"kubernetes.io/projected/d21b0b26-1895-45e4-bf96-1efab1f33644-kube-api-access-nzfcd\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.372090 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329e20a2-8966-48c0-8300-bc996770880d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.372105 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3e83b774-3784-4b56-b452-a3a04fc9929f-signing-cabundle\") pod \"service-ca-9c57cc56f-7l2r7\" (UID: \"3e83b774-3784-4b56-b452-a3a04fc9929f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: E0218 00:36:33.374462 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:33.874448272 +0000 UTC m=+147.180285004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378076 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c8rl\" (UniqueName: \"kubernetes.io/projected/96fed31a-2574-4ee1-9781-f4cfd1f9c68b-kube-api-access-8c8rl\") pod \"openshift-config-operator-7777fb866f-p8987\" (UID: \"96fed31a-2574-4ee1-9781-f4cfd1f9c68b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc20a1c-6687-4ddd-baad-b18790cae2f9-serving-cert\") pod \"service-ca-operator-777779d784-kp8d2\" (UID: \"bfc20a1c-6687-4ddd-baad-b18790cae2f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378149 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qfml\" (UniqueName: \"kubernetes.io/projected/706d9c75-e27c-4596-80d8-68bf71015ca0-kube-api-access-2qfml\") pod \"package-server-manager-789f6589d5-smdtb\" (UID: \"706d9c75-e27c-4596-80d8-68bf71015ca0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378189 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7s6n\" (UniqueName: \"kubernetes.io/projected/3e83b774-3784-4b56-b452-a3a04fc9929f-kube-api-access-v7s6n\") pod \"service-ca-9c57cc56f-7l2r7\" (UID: \"3e83b774-3784-4b56-b452-a3a04fc9929f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378227 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f80a41ed-22eb-4af8-8374-d22019caf19e-certs\") pod \"machine-config-server-g72m8\" (UID: \"f80a41ed-22eb-4af8-8374-d22019caf19e\") " pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378300 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f80a41ed-22eb-4af8-8374-d22019caf19e-node-bootstrap-token\") pod \"machine-config-server-g72m8\" (UID: \"f80a41ed-22eb-4af8-8374-d22019caf19e\") " pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378317 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mfftk\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-registry-certificates\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378366 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73fd9054-c7ef-49ad-b80e-db70402b6af2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7wmw2\" (UID: \"73fd9054-c7ef-49ad-b80e-db70402b6af2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378385 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvbg\" (UniqueName: \"kubernetes.io/projected/f80a41ed-22eb-4af8-8374-d22019caf19e-kube-api-access-rvvbg\") pod \"machine-config-server-g72m8\" (UID: \"f80a41ed-22eb-4af8-8374-d22019caf19e\") " pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378415 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s64mx\" (UniqueName: \"kubernetes.io/projected/bfc20a1c-6687-4ddd-baad-b18790cae2f9-kube-api-access-s64mx\") pod \"service-ca-operator-777779d784-kp8d2\" (UID: \"bfc20a1c-6687-4ddd-baad-b18790cae2f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.378440 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329e20a2-8966-48c0-8300-bc996770880d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.390470 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.396068 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.413760 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.421588 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.452458 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.478089 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.479825 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480062 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/96fed31a-2574-4ee1-9781-f4cfd1f9c68b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p8987\" (UID: \"96fed31a-2574-4ee1-9781-f4cfd1f9c68b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480219 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpsw\" (UniqueName: \"kubernetes.io/projected/d7046c26-d46d-419b-817d-a675e207d07c-kube-api-access-zzpsw\") pod \"dns-default-jnlh5\" (UID: \"d7046c26-d46d-419b-817d-a675e207d07c\") " pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480259 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-bound-sa-token\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96fed31a-2574-4ee1-9781-f4cfd1f9c68b-serving-cert\") pod \"openshift-config-operator-7777fb866f-p8987\" (UID: \"96fed31a-2574-4ee1-9781-f4cfd1f9c68b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480321 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-mountpoint-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfc20a1c-6687-4ddd-baad-b18790cae2f9-config\") pod \"service-ca-operator-777779d784-kp8d2\" (UID: \"bfc20a1c-6687-4ddd-baad-b18790cae2f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480426 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/706d9c75-e27c-4596-80d8-68bf71015ca0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-smdtb\" (UID: \"706d9c75-e27c-4596-80d8-68bf71015ca0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480461 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-plugins-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480556 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-registry-tls\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73fd9054-c7ef-49ad-b80e-db70402b6af2-config\") pod \"kube-apiserver-operator-766d6c64bb-7wmw2\" (UID: \"73fd9054-c7ef-49ad-b80e-db70402b6af2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480624 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e83b774-3784-4b56-b452-a3a04fc9929f-signing-key\") pod \"service-ca-9c57cc56f-7l2r7\" (UID: \"3e83b774-3784-4b56-b452-a3a04fc9929f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480666 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mfftk\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d21b0b26-1895-45e4-bf96-1efab1f33644-webhook-cert\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-socket-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480724 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d21b0b26-1895-45e4-bf96-1efab1f33644-tmpfs\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480782 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73fd9054-c7ef-49ad-b80e-db70402b6af2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7wmw2\" (UID: \"73fd9054-c7ef-49ad-b80e-db70402b6af2\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480818 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7046c26-d46d-419b-817d-a675e207d07c-config-volume\") pod \"dns-default-jnlh5\" (UID: \"d7046c26-d46d-419b-817d-a675e207d07c\") " pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480842 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnvd\" (UniqueName: \"kubernetes.io/projected/693a6651-227a-4a62-85df-4a7e667c3daf-kube-api-access-2cnvd\") pod \"marketplace-operator-79b997595-mfftk\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480863 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79427318-6288-4dd5-8209-dae415c0dab4-proxy-tls\") pod \"machine-config-controller-84d6567774-wswhj\" (UID: \"79427318-6288-4dd5-8209-dae415c0dab4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480920 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-kube-api-access-vpvb2\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480966 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-csi-data-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.480988 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h6r5\" (UniqueName: \"kubernetes.io/projected/2bbea11f-6abd-4472-af4f-2b838e9ad97e-kube-api-access-6h6r5\") pod \"ingress-canary-26v6w\" (UID: \"2bbea11f-6abd-4472-af4f-2b838e9ad97e\") " pod="openshift-ingress-canary/ingress-canary-26v6w" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-trusted-ca\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481079 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvr4g\" (UniqueName: \"kubernetes.io/projected/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-kube-api-access-jvr4g\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481103 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/79427318-6288-4dd5-8209-dae415c0dab4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wswhj\" (UID: \"79427318-6288-4dd5-8209-dae415c0dab4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d21b0b26-1895-45e4-bf96-1efab1f33644-apiservice-cert\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzfcd\" (UniqueName: \"kubernetes.io/projected/d21b0b26-1895-45e4-bf96-1efab1f33644-kube-api-access-nzfcd\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-registration-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329e20a2-8966-48c0-8300-bc996770880d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3e83b774-3784-4b56-b452-a3a04fc9929f-signing-cabundle\") pod \"service-ca-9c57cc56f-7l2r7\" (UID: \"3e83b774-3784-4b56-b452-a3a04fc9929f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c8rl\" (UniqueName: \"kubernetes.io/projected/96fed31a-2574-4ee1-9781-f4cfd1f9c68b-kube-api-access-8c8rl\") pod \"openshift-config-operator-7777fb866f-p8987\" (UID: \"96fed31a-2574-4ee1-9781-f4cfd1f9c68b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qfml\" (UniqueName: \"kubernetes.io/projected/706d9c75-e27c-4596-80d8-68bf71015ca0-kube-api-access-2qfml\") pod \"package-server-manager-789f6589d5-smdtb\" (UID: \"706d9c75-e27c-4596-80d8-68bf71015ca0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc20a1c-6687-4ddd-baad-b18790cae2f9-serving-cert\") pod \"service-ca-operator-777779d784-kp8d2\" (UID: 
\"bfc20a1c-6687-4ddd-baad-b18790cae2f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7s6n\" (UniqueName: \"kubernetes.io/projected/3e83b774-3784-4b56-b452-a3a04fc9929f-kube-api-access-v7s6n\") pod \"service-ca-9c57cc56f-7l2r7\" (UID: \"3e83b774-3784-4b56-b452-a3a04fc9929f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7046c26-d46d-419b-817d-a675e207d07c-metrics-tls\") pod \"dns-default-jnlh5\" (UID: \"d7046c26-d46d-419b-817d-a675e207d07c\") " pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481467 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bbea11f-6abd-4472-af4f-2b838e9ad97e-cert\") pod \"ingress-canary-26v6w\" (UID: \"2bbea11f-6abd-4472-af4f-2b838e9ad97e\") " pod="openshift-ingress-canary/ingress-canary-26v6w" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f80a41ed-22eb-4af8-8374-d22019caf19e-node-bootstrap-token\") pod \"machine-config-server-g72m8\" (UID: \"f80a41ed-22eb-4af8-8374-d22019caf19e\") " pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f80a41ed-22eb-4af8-8374-d22019caf19e-certs\") pod \"machine-config-server-g72m8\" (UID: \"f80a41ed-22eb-4af8-8374-d22019caf19e\") " pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481648 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mfftk\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481682 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-registry-certificates\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481727 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73fd9054-c7ef-49ad-b80e-db70402b6af2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7wmw2\" (UID: \"73fd9054-c7ef-49ad-b80e-db70402b6af2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s64mx\" (UniqueName: \"kubernetes.io/projected/bfc20a1c-6687-4ddd-baad-b18790cae2f9-kube-api-access-s64mx\") pod \"service-ca-operator-777779d784-kp8d2\" (UID: \"bfc20a1c-6687-4ddd-baad-b18790cae2f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481759 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvbg\" (UniqueName: \"kubernetes.io/projected/f80a41ed-22eb-4af8-8374-d22019caf19e-kube-api-access-rvvbg\") pod \"machine-config-server-g72m8\" (UID: \"f80a41ed-22eb-4af8-8374-d22019caf19e\") " pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481802 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329e20a2-8966-48c0-8300-bc996770880d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.481845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7cl\" (UniqueName: \"kubernetes.io/projected/79427318-6288-4dd5-8209-dae415c0dab4-kube-api-access-7s7cl\") pod \"machine-config-controller-84d6567774-wswhj\" (UID: \"79427318-6288-4dd5-8209-dae415c0dab4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:33 crc kubenswrapper[4858]: E0218 00:36:33.482247 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:33.982232085 +0000 UTC m=+147.288068817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.483522 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/96fed31a-2574-4ee1-9781-f4cfd1f9c68b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p8987\" (UID: \"96fed31a-2574-4ee1-9781-f4cfd1f9c68b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.485695 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79427318-6288-4dd5-8209-dae415c0dab4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wswhj\" (UID: \"79427318-6288-4dd5-8209-dae415c0dab4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.497016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-registration-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.497410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329e20a2-8966-48c0-8300-bc996770880d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.497572 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mfftk\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.498005 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3e83b774-3784-4b56-b452-a3a04fc9929f-signing-cabundle\") pod \"service-ca-9c57cc56f-7l2r7\" (UID: \"3e83b774-3784-4b56-b452-a3a04fc9929f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.503746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-socket-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.504825 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-csi-data-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.505803 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-registry-certificates\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.505941 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73fd9054-c7ef-49ad-b80e-db70402b6af2-config\") pod \"kube-apiserver-operator-766d6c64bb-7wmw2\" (UID: \"73fd9054-c7ef-49ad-b80e-db70402b6af2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.506800 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73fd9054-c7ef-49ad-b80e-db70402b6af2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7wmw2\" (UID: \"73fd9054-c7ef-49ad-b80e-db70402b6af2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.507911 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-trusted-ca\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.508790 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d21b0b26-1895-45e4-bf96-1efab1f33644-tmpfs\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.509149 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfc20a1c-6687-4ddd-baad-b18790cae2f9-config\") pod \"service-ca-operator-777779d784-kp8d2\" (UID: \"bfc20a1c-6687-4ddd-baad-b18790cae2f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.509897 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-plugins-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.511164 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-mountpoint-dir\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.529207 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-registry-tls\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.529715 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f80a41ed-22eb-4af8-8374-d22019caf19e-certs\") pod \"machine-config-server-g72m8\" (UID: \"f80a41ed-22eb-4af8-8374-d22019caf19e\") " pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.533593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d21b0b26-1895-45e4-bf96-1efab1f33644-apiservice-cert\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.533995 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96fed31a-2574-4ee1-9781-f4cfd1f9c68b-serving-cert\") pod \"openshift-config-operator-7777fb866f-p8987\" (UID: \"96fed31a-2574-4ee1-9781-f4cfd1f9c68b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.539676 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc20a1c-6687-4ddd-baad-b18790cae2f9-serving-cert\") pod \"service-ca-operator-777779d784-kp8d2\" (UID: \"bfc20a1c-6687-4ddd-baad-b18790cae2f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.552426 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e83b774-3784-4b56-b452-a3a04fc9929f-signing-key\") pod \"service-ca-9c57cc56f-7l2r7\" (UID: \"3e83b774-3784-4b56-b452-a3a04fc9929f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.560105 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/706d9c75-e27c-4596-80d8-68bf71015ca0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-smdtb\" (UID: \"706d9c75-e27c-4596-80d8-68bf71015ca0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.567987 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79427318-6288-4dd5-8209-dae415c0dab4-proxy-tls\") pod \"machine-config-controller-84d6567774-wswhj\" (UID: \"79427318-6288-4dd5-8209-dae415c0dab4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.568027 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f80a41ed-22eb-4af8-8374-d22019caf19e-node-bootstrap-token\") pod \"machine-config-server-g72m8\" (UID: \"f80a41ed-22eb-4af8-8374-d22019caf19e\") " 
pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.568094 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7s6n\" (UniqueName: \"kubernetes.io/projected/3e83b774-3784-4b56-b452-a3a04fc9929f-kube-api-access-v7s6n\") pod \"service-ca-9c57cc56f-7l2r7\" (UID: \"3e83b774-3784-4b56-b452-a3a04fc9929f\") " pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.585052 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d21b0b26-1895-45e4-bf96-1efab1f33644-webhook-cert\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.585368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mfftk\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.585483 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329e20a2-8966-48c0-8300-bc996770880d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.590455 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpsw\" (UniqueName: \"kubernetes.io/projected/d7046c26-d46d-419b-817d-a675e207d07c-kube-api-access-zzpsw\") pod \"dns-default-jnlh5\" (UID: \"d7046c26-d46d-419b-817d-a675e207d07c\") " pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.592676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7046c26-d46d-419b-817d-a675e207d07c-config-volume\") pod \"dns-default-jnlh5\" (UID: \"d7046c26-d46d-419b-817d-a675e207d07c\") " pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.592723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h6r5\" (UniqueName: \"kubernetes.io/projected/2bbea11f-6abd-4472-af4f-2b838e9ad97e-kube-api-access-6h6r5\") pod \"ingress-canary-26v6w\" (UID: \"2bbea11f-6abd-4472-af4f-2b838e9ad97e\") " pod="openshift-ingress-canary/ingress-canary-26v6w" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.592752 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.592826 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/d7046c26-d46d-419b-817d-a675e207d07c-metrics-tls\") pod \"dns-default-jnlh5\" (UID: \"d7046c26-d46d-419b-817d-a675e207d07c\") " pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.592844 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bbea11f-6abd-4472-af4f-2b838e9ad97e-cert\") pod \"ingress-canary-26v6w\" (UID: \"2bbea11f-6abd-4472-af4f-2b838e9ad97e\") " pod="openshift-ingress-canary/ingress-canary-26v6w" Feb 18 00:36:33 crc kubenswrapper[4858]: E0218 00:36:33.593261 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:34.093246837 +0000 UTC m=+147.399083569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.593258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7046c26-d46d-419b-817d-a675e207d07c-config-volume\") pod \"dns-default-jnlh5\" (UID: \"d7046c26-d46d-419b-817d-a675e207d07c\") " pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.600578 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvr4g\" (UniqueName: \"kubernetes.io/projected/7cc6c0de-0fa4-4366-b66d-7e8753c27f9f-kube-api-access-jvr4g\") pod \"csi-hostpathplugin-nmn6s\" (UID: \"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f\") " pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.607243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-kube-api-access-vpvb2\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.610591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-bound-sa-token\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.629864 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7046c26-d46d-419b-817d-a675e207d07c-metrics-tls\") pod \"dns-default-jnlh5\" (UID: \"d7046c26-d46d-419b-817d-a675e207d07c\") " pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.635121 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bbea11f-6abd-4472-af4f-2b838e9ad97e-cert\") pod 
\"ingress-canary-26v6w\" (UID: \"2bbea11f-6abd-4472-af4f-2b838e9ad97e\") " pod="openshift-ingress-canary/ingress-canary-26v6w" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.637485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c8rl\" (UniqueName: \"kubernetes.io/projected/96fed31a-2574-4ee1-9781-f4cfd1f9c68b-kube-api-access-8c8rl\") pod \"openshift-config-operator-7777fb866f-p8987\" (UID: \"96fed31a-2574-4ee1-9781-f4cfd1f9c68b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.641075 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzfcd\" (UniqueName: \"kubernetes.io/projected/d21b0b26-1895-45e4-bf96-1efab1f33644-kube-api-access-nzfcd\") pod \"packageserver-d55dfcdfc-b772q\" (UID: \"d21b0b26-1895-45e4-bf96-1efab1f33644\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.667603 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt"] Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.681131 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.681654 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cnvd\" (UniqueName: \"kubernetes.io/projected/693a6651-227a-4a62-85df-4a7e667c3daf-kube-api-access-2cnvd\") pod \"marketplace-operator-79b997595-mfftk\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.682957 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73fd9054-c7ef-49ad-b80e-db70402b6af2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7wmw2\" (UID: \"73fd9054-c7ef-49ad-b80e-db70402b6af2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.694595 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:33 crc kubenswrapper[4858]: E0218 00:36:33.695136 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:34.195118715 +0000 UTC m=+147.500955447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.701041 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s64mx\" (UniqueName: \"kubernetes.io/projected/bfc20a1c-6687-4ddd-baad-b18790cae2f9-kube-api-access-s64mx\") pod \"service-ca-operator-777779d784-kp8d2\" (UID: \"bfc20a1c-6687-4ddd-baad-b18790cae2f9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.716627 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cjd57"] Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.719370 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvbg\" (UniqueName: \"kubernetes.io/projected/f80a41ed-22eb-4af8-8374-d22019caf19e-kube-api-access-rvvbg\") pod \"machine-config-server-g72m8\" (UID: \"f80a41ed-22eb-4af8-8374-d22019caf19e\") " pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.737537 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qfml\" (UniqueName: \"kubernetes.io/projected/706d9c75-e27c-4596-80d8-68bf71015ca0-kube-api-access-2qfml\") pod \"package-server-manager-789f6589d5-smdtb\" (UID: \"706d9c75-e27c-4596-80d8-68bf71015ca0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.737811 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.754890 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.755985 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.770879 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.785419 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g72m8" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.789231 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7cl\" (UniqueName: \"kubernetes.io/projected/79427318-6288-4dd5-8209-dae415c0dab4-kube-api-access-7s7cl\") pod \"machine-config-controller-84d6567774-wswhj\" (UID: \"79427318-6288-4dd5-8209-dae415c0dab4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.799057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: E0218 00:36:33.799406 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:34.299394601 +0000 UTC m=+147.605231333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.805148 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.865806 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h6r5\" (UniqueName: \"kubernetes.io/projected/2bbea11f-6abd-4472-af4f-2b838e9ad97e-kube-api-access-6h6r5\") pod \"ingress-canary-26v6w\" (UID: \"2bbea11f-6abd-4472-af4f-2b838e9ad97e\") " pod="openshift-ingress-canary/ingress-canary-26v6w" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.883912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpsw\" (UniqueName: \"kubernetes.io/projected/d7046c26-d46d-419b-817d-a675e207d07c-kube-api-access-zzpsw\") pod \"dns-default-jnlh5\" (UID: \"d7046c26-d46d-419b-817d-a675e207d07c\") " pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.901473 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:33 crc kubenswrapper[4858]: E0218 00:36:33.901762 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 00:36:34.40174653 +0000 UTC m=+147.707583262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.901811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:33 crc kubenswrapper[4858]: E0218 00:36:33.902081 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:34.402074379 +0000 UTC m=+147.707911111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:33 crc kubenswrapper[4858]: I0218 00:36:33.955986 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.003514 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.003663 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.003948 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:34.503928816 +0000 UTC m=+147.809765558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.030726 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.106259 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.106815 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:34.606796888 +0000 UTC m=+147.912633620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.114049 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-26v6w" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.127776 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.208191 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.208644 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:34.708623245 +0000 UTC m=+148.014459977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.311196 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.311879 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:34.811855405 +0000 UTC m=+148.117692137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.353270 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6djsl"] Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.367948 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mxwrb" event={"ID":"afbe6075-e81f-464a-bfb5-7e97510ee945","Type":"ContainerStarted","Data":"bed54cbcff8817fc0f1b3a89cd727f80cd311b615f1dced46b4592ec576a6208"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.367989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mxwrb" event={"ID":"afbe6075-e81f-464a-bfb5-7e97510ee945","Type":"ContainerStarted","Data":"4909fade28da036f95293f08fa02cd9fd5d2de7b4ab2a7dae63247d41aca3da2"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.405716 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" event={"ID":"92e95ff1-a825-4d17-825f-f4765353a5f2","Type":"ContainerStarted","Data":"e93c5ad4705f0bc55db0404ec4ae73703bfedc19dfa10b9c8b24d300db044651"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.413119 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.413385 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 00:36:34.913370474 +0000 UTC m=+148.219207206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.423644 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.424406 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" event={"ID":"896376a1-7809-4597-a315-2089547c2f89","Type":"ContainerStarted","Data":"090e89234015ab1b70f15d1e4a31cd3bbf6f04be90c2933c39195421417f3ea5"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.424434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" event={"ID":"896376a1-7809-4597-a315-2089547c2f89","Type":"ContainerStarted","Data":"afbdf7f65b579769cfc396bef7db14f980ae87d34134d0bd26c1fd35dd0cea5c"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.430289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g72m8" event={"ID":"f80a41ed-22eb-4af8-8374-d22019caf19e","Type":"ContainerStarted","Data":"d2ac4f1b57a217f5da1f7be8e644787d459d0006db8c5e38986861ddd4872493"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.430317 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g72m8" event={"ID":"f80a41ed-22eb-4af8-8374-d22019caf19e","Type":"ContainerStarted","Data":"40e00bc5f7a9b643a02315507c6e5b27a08474f052545ed1b3316feb90790ea1"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.442929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt" event={"ID":"b0bd9345-840f-40e8-946d-b646e19a6b39","Type":"ContainerStarted","Data":"2d0a3f13708c9c5e54b29e3d8245845a83b8dcd78ed3b2716148d0f0c6f5c6e6"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.442980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt" event={"ID":"b0bd9345-840f-40e8-946d-b646e19a6b39","Type":"ContainerStarted","Data":"7a002c3cfabc3638e7fe08567aabc46a3ebc60e5e688f5484e663b6571cd6dd7"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.461937 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-lpg4n" podStartSLOduration=122.46192188 podStartE2EDuration="2m2.46192188s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:34.459870619 +0000 UTC m=+147.765707341" watchObservedRunningTime="2026-02-18 00:36:34.46192188 +0000 UTC m=+147.767758612" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.462154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" event={"ID":"93a92f34-d9a8-4276-8a97-3f129c4db452","Type":"ContainerStarted","Data":"f94b7f5f0cc76fb8cc7b1f1fcb544f3e36e27dce2a15712b43e6a515253b0f5b"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.462189 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" event={"ID":"93a92f34-d9a8-4276-8a97-3f129c4db452","Type":"ContainerStarted","Data":"46fee5d148cc7f83d0a2a923994cc809518b07089dfad70fba2877651057837b"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.474855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" event={"ID":"849a4228-f4ae-4b7f-a2c8-5db413e4dd28","Type":"ContainerStarted","Data":"f946b349376def9e0f8ec44d4a0ac6c993b95035e5aaa9c812153cc0c30b3c10"} Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.509242 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29522880-n4bvp"] Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.509878 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" podStartSLOduration=122.509861359 podStartE2EDuration="2m2.509861359s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:34.507129012 +0000 UTC m=+147.812965744" watchObservedRunningTime="2026-02-18 00:36:34.509861359 +0000 UTC m=+147.815698091" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.514389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.514930 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.014917573 +0000 UTC m=+148.320754305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.527710 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x"] Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.617975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.619658 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.119641771 +0000 UTC m=+148.425478503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.654384 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-s76q5" podStartSLOduration=122.654367996 podStartE2EDuration="2m2.654367996s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:34.65370016 +0000 UTC m=+147.959536892" watchObservedRunningTime="2026-02-18 00:36:34.654367996 +0000 UTC m=+147.960204728" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.660853 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.660971 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.664094 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.705150 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c7nlk" podStartSLOduration=122.705131795 podStartE2EDuration="2m2.705131795s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:34.704746086 +0000 UTC m=+148.010582818" watchObservedRunningTime="2026-02-18 00:36:34.705131795 +0000 UTC m=+148.010968527" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.723036 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.723335 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.223325183 +0000 UTC m=+148.529161915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.823752 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.824260 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.324244237 +0000 UTC m=+148.630080969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.844758 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" podStartSLOduration=121.844742462 podStartE2EDuration="2m1.844742462s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:34.82393203 +0000 UTC m=+148.129768762" watchObservedRunningTime="2026-02-18 00:36:34.844742462 +0000 UTC m=+148.150579194" Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.845801 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n879l"] Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.860689 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9jdl7"] Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.884430 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l"] Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.892621 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q"] Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.906577 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx"] Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.932121 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:34 crc kubenswrapper[4858]: E0218 00:36:34.932449 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.432435401 +0000 UTC m=+148.738272133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.938431 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc"] Feb 18 00:36:34 crc kubenswrapper[4858]: I0218 00:36:34.974295 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mxwrb" podStartSLOduration=122.97427997 podStartE2EDuration="2m2.97427997s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:34.939775941 +0000 UTC m=+148.245612663" watchObservedRunningTime="2026-02-18 00:36:34.97427997 +0000 UTC m=+148.280116702" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.007447 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4l24x" podStartSLOduration=123.007427796 podStartE2EDuration="2m3.007427796s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:34.974984678 +0000 UTC m=+148.280821410" watchObservedRunningTime="2026-02-18 00:36:35.007427796 +0000 UTC m=+148.313264528" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.033379 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.033727 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.533702663 +0000 UTC m=+148.839539395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.033819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.034295 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.534286617 +0000 UTC m=+148.840123349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.137345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.137847 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.637822096 +0000 UTC m=+148.943658828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.226194 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mfftk"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.240332 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.241451 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.241830 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.741816366 +0000 UTC m=+149.047653098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.243219 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.270550 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gtw2k"] Feb 18 00:36:35 crc kubenswrapper[4858]: W0218 00:36:35.328974 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44c5ceae_0c80_4b01_a773_8c222c900f34.slice/crio-10b9714b1633c1fdb19da3ef51c7234798e031a042f1bf42d7a2bdbefb071938 WatchSource:0}: Error finding container 10b9714b1633c1fdb19da3ef51c7234798e031a042f1bf42d7a2bdbefb071938: Status 404 returned error can't find the container with id 10b9714b1633c1fdb19da3ef51c7234798e031a042f1bf42d7a2bdbefb071938 Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.341899 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.342081 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.342110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.342150 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.342188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.343351 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.343529 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.843468558 +0000 UTC m=+149.149305290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.361733 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.362238 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.362364 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.373745 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-27k9h" podStartSLOduration=122.373731213 podStartE2EDuration="2m2.373731213s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:35.339960702 +0000 UTC m=+148.645797444" watchObservedRunningTime="2026-02-18 00:36:35.373731213 +0000 UTC m=+148.679567945" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.430115 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:35 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:35 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:35 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.430167 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.437975 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.445181 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.445563 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:35.945551921 +0000 UTC m=+149.251388653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.454848 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.456389 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.546944 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.547213 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:36.047198813 +0000 UTC m=+149.353035545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.587466 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vzdrt" podStartSLOduration=123.587447843 podStartE2EDuration="2m3.587447843s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:35.579073717 +0000 UTC m=+148.884910439" watchObservedRunningTime="2026-02-18 00:36:35.587447843 +0000 UTC m=+148.893284575" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.605001 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nmn6s"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.605044 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.605057 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.605065 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n2kdn"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.627506 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" event={"ID":"f8506161-354f-42a0-8f15-9c02ba3fe215","Type":"ContainerStarted","Data":"7840373482efcaff7379519d124df6a02e290af42a3f5219976b4b2537f7847a"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.628166 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kzbz5" podStartSLOduration=123.628144474 podStartE2EDuration="2m3.628144474s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:35.626954776 +0000 UTC m=+148.932791508" watchObservedRunningTime="2026-02-18 00:36:35.628144474 +0000 UTC m=+148.933981206" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.649278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.649791 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 00:36:36.149779798 +0000 UTC m=+149.455616530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.656280 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bc7mz"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.676871 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-574cj" podStartSLOduration=122.676853204 podStartE2EDuration="2m2.676853204s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:35.673688046 +0000 UTC m=+148.979524778" watchObservedRunningTime="2026-02-18 00:36:35.676853204 +0000 UTC m=+148.982689936" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.682427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" event={"ID":"693a6651-227a-4a62-85df-4a7e667c3daf","Type":"ContainerStarted","Data":"69250dbe0111e7b2ad581e8c3de9f6fd90f0032a5a1e45a9a1f528cbf047eebd"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.729005 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7l2r7"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.751982 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-g72m8" podStartSLOduration=5.751964823 podStartE2EDuration="5.751964823s" podCreationTimestamp="2026-02-18 00:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:35.747238037 +0000 UTC m=+149.053074769" watchObservedRunningTime="2026-02-18 00:36:35.751964823 +0000 UTC m=+149.057801555" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.753961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.754196 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:36.254185428 +0000 UTC m=+149.560022160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.755546 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p8987"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.763878 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt" event={"ID":"b0bd9345-840f-40e8-946d-b646e19a6b39","Type":"ContainerStarted","Data":"fb5366558d90dc12279a44e5a7c4a9317394058050c6abeba932200f904b6a42"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.775466 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj"] Feb 18 00:36:35 crc kubenswrapper[4858]: W0218 00:36:35.777406 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64aaf596_bd11_435d_97ae_0c02f0f93c9f.slice/crio-30db84ba93e285f4a3f8cc6338123925c11d69de9dbdaddce0feb4429fe3f820 WatchSource:0}: Error finding container 30db84ba93e285f4a3f8cc6338123925c11d69de9dbdaddce0feb4429fe3f820: Status 404 returned error can't find the container with id 30db84ba93e285f4a3f8cc6338123925c11d69de9dbdaddce0feb4429fe3f820 Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.779230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" event={"ID":"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c","Type":"ContainerStarted","Data":"52c2e1ce993b668908261f097babbb92388ca24ad0568efc2379f80301b3adf9"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.828693 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.831951 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.834885 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c7bmt" podStartSLOduration=122.834866563 podStartE2EDuration="2m2.834866563s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:35.805622753 +0000 UTC m=+149.111459485" watchObservedRunningTime="2026-02-18 00:36:35.834866563 +0000 UTC m=+149.140703295" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.838075 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.856456 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-26v6w"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.857327 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.859289 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:36.359276105 +0000 UTC m=+149.665112837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.878210 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jnlh5"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.880468 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb"] Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.887171 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" event={"ID":"92e95ff1-a825-4d17-825f-f4765353a5f2","Type":"ContainerStarted","Data":"ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.887424 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.892197 4858 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-cjd57 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.892238 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" podUID="92e95ff1-a825-4d17-825f-f4765353a5f2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.915804 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" event={"ID":"44c5ceae-0c80-4b01-a773-8c222c900f34","Type":"ContainerStarted","Data":"10b9714b1633c1fdb19da3ef51c7234798e031a042f1bf42d7a2bdbefb071938"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.918388 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" event={"ID":"a4447453-79a5-4008-89ec-add924803b82","Type":"ContainerStarted","Data":"0a4ed2bea4093b6d302f40c999239c14724d9b1e5ffb8af76351f17704dda0b2"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.919781 4858 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" event={"ID":"f95d7fd9-797f-464f-ac5e-e78c353e78ee","Type":"ContainerStarted","Data":"cd9b443d0573bda6011bf8191f16282f2c9db28422d1d3ef92f02087e28a947f"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.919800 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" event={"ID":"f95d7fd9-797f-464f-ac5e-e78c353e78ee","Type":"ContainerStarted","Data":"8b653f8864fd9746a78e15f9601bbd5844b261a06fa5ed6b7de9c831e1e1e39c"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.927735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" event={"ID":"08ad236d-4644-4e49-b9d5-194b2746a760","Type":"ContainerStarted","Data":"e722e50ee4b2666c861cf02843249c6363eb5de5b383f694271213eeac9ff402"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.954308 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" podStartSLOduration=123.954287772 podStartE2EDuration="2m3.954287772s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:35.933738747 +0000 UTC m=+149.239575469" watchObservedRunningTime="2026-02-18 00:36:35.954287772 +0000 UTC m=+149.260124504" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.955586 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wnh9x" podStartSLOduration=122.955581274 podStartE2EDuration="2m2.955581274s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:35.955059032 +0000 UTC m=+149.260895764" watchObservedRunningTime="2026-02-18 00:36:35.955581274 +0000 UTC m=+149.261418006" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.962525 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gtw2k" event={"ID":"646ba69d-8375-436b-a16f-e7bae5475ac6","Type":"ContainerStarted","Data":"32de71814df7106f80974d756ca05fada427ca3048ee37c1b9db00089a02b958"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.963314 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:35 crc kubenswrapper[4858]: E0218 00:36:35.966928 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:36.466911963 +0000 UTC m=+149.772748695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.976109 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" podStartSLOduration=123.976095039 podStartE2EDuration="2m3.976095039s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:35.974698885 +0000 UTC m=+149.280535617" watchObservedRunningTime="2026-02-18 00:36:35.976095039 +0000 UTC m=+149.281931771" Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.978177 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" event={"ID":"eef415f0-0fe2-4c5c-a528-3394ce644ff1","Type":"ContainerStarted","Data":"8b6d2653f59860598672724b1ab26918d6c03900aed820f9fda74651af2e0812"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.978222 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" event={"ID":"eef415f0-0fe2-4c5c-a528-3394ce644ff1","Type":"ContainerStarted","Data":"cc2ca88b6ed1df60b6ac6e65bb8f8ff5e3a6946e101b2f93400b4d9a6c4758dd"} Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.987633 4858 csr.go:261] certificate signing request csr-4gh88 is approved, waiting to be issued Feb 18 00:36:35 crc kubenswrapper[4858]: I0218 00:36:35.997855 4858 csr.go:257] certificate signing request csr-4gh88 is issued Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.014030 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" event={"ID":"2798c53f-d277-411d-b95d-3439db650d71","Type":"ContainerStarted","Data":"f0b108675bef78ba0b55474739171bd84679818b9b4732445e900121d69d7da0"} Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.062977 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" event={"ID":"12b8a5f7-869c-4343-8224-ae76d73073cf","Type":"ContainerStarted","Data":"df3ec6a767daa038f917fba638ac8e9b3cb8cae8a1d0e1fe6128f914ac4f9735"} Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.067041 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.068074 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 00:36:36.568063324 +0000 UTC m=+149.873900056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.084464 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6djsl" event={"ID":"b132ff06-2a28-42df-b43b-f923a76b4cca","Type":"ContainerStarted","Data":"378ad8635c4dad3a9d30d80ec07dbe2a1fe300ff732da5ce9fadc96fba924f8b"} Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.084517 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6djsl" event={"ID":"b132ff06-2a28-42df-b43b-f923a76b4cca","Type":"ContainerStarted","Data":"2acff62ee69dcdc13ded398bdbf2a7a0b6faa3fa4bb9bdd518b1d65e84552630"} Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.088068 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6djsl" Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.089485 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" podStartSLOduration=124.0894697 podStartE2EDuration="2m4.0894697s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:36.087473651 +0000 UTC m=+149.393310383" watchObservedRunningTime="2026-02-18 00:36:36.0894697 +0000 UTC m=+149.395306432" Feb 18 00:36:36 crc kubenswrapper[4858]: W0218 00:36:36.089575 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-4d5250f5935f814ab965a5f455dbd8bb03cd16c5d0589668aa9784e8e73dd5fd WatchSource:0}: Error finding container 4d5250f5935f814ab965a5f455dbd8bb03cd16c5d0589668aa9784e8e73dd5fd: Status 404 returned error can't find the container with id 4d5250f5935f814ab965a5f455dbd8bb03cd16c5d0589668aa9784e8e73dd5fd Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.089646 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-6djsl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.089681 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6djsl" podUID="b132ff06-2a28-42df-b43b-f923a76b4cca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.110988 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6djsl" podStartSLOduration=124.110970279 podStartE2EDuration="2m4.110970279s" podCreationTimestamp="2026-02-18 00:34:32 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:36.108613152 +0000 UTC m=+149.414449884" watchObservedRunningTime="2026-02-18 00:36:36.110970279 +0000 UTC m=+149.416807011" Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.123436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-n4bvp" event={"ID":"5977586b-6538-4050-bfde-dde62e4d87cd","Type":"ContainerStarted","Data":"1f69f73bd6d9598c297113b1b247f931194ceace1ff463434880b6dba90caef0"} Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.123477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-n4bvp" event={"ID":"5977586b-6538-4050-bfde-dde62e4d87cd","Type":"ContainerStarted","Data":"3ef2d1fbd2355beb4b124f9f7ce775887e3da6b72135e8e274cf11a648852266"} Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.134853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" event={"ID":"849a4228-f4ae-4b7f-a2c8-5db413e4dd28","Type":"ContainerStarted","Data":"91683a5b54c434b9fb2d11ee3f81afbaf0cef59649f413f4512932fde9d00177"} Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.140578 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29522880-n4bvp" podStartSLOduration=124.140567567 podStartE2EDuration="2m4.140567567s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:36.139077541 +0000 UTC m=+149.444914273" watchObservedRunningTime="2026-02-18 00:36:36.140567567 +0000 UTC m=+149.446404299" Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.160227 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-kqbdg" podStartSLOduration=124.160209771 podStartE2EDuration="2m4.160209771s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:36.157901985 +0000 UTC m=+149.463738717" watchObservedRunningTime="2026-02-18 00:36:36.160209771 +0000 UTC m=+149.466046503" Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.169180 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.169834 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:36.669816617 +0000 UTC m=+149.975653349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.271552 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.274333 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:36.77432043 +0000 UTC m=+150.080157162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.375432 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.376029 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:36.876009413 +0000 UTC m=+150.181846145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.426659 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:36 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:36 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:36 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.426716 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:36 crc kubenswrapper[4858]: W0218 00:36:36.438378 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3b57fabffb14c32841fcfc6c9b18969f198de5110d0972c0aaa6e83eb3731916 WatchSource:0}: Error finding container 3b57fabffb14c32841fcfc6c9b18969f198de5110d0972c0aaa6e83eb3731916: Status 404 returned error can't find the container with id 3b57fabffb14c32841fcfc6c9b18969f198de5110d0972c0aaa6e83eb3731916 Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.477643 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.479422 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:36.979407408 +0000 UTC m=+150.285244140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.579142 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.579315 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.079294877 +0000 UTC m=+150.385131609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.579699 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.580098 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.080088956 +0000 UTC m=+150.385925688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.680349 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.680649 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.180634341 +0000 UTC m=+150.486471073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.781238 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.784236 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.283490413 +0000 UTC m=+150.589327165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.882283 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.882430 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.382407687 +0000 UTC m=+150.688244419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.882559 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.882882 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.382867519 +0000 UTC m=+150.688704251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.983657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.983792 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.483774213 +0000 UTC m=+150.789610945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.984217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:36 crc kubenswrapper[4858]: E0218 00:36:36.984553 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.484541191 +0000 UTC m=+150.790377923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.999127 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 00:31:35 +0000 UTC, rotation deadline is 2026-11-16 00:50:37.597248281 +0000 UTC Feb 18 00:36:36 crc kubenswrapper[4858]: I0218 00:36:36.999176 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6504h14m0.598074589s for next certificate rotation Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.084932 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.085203 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.585175369 +0000 UTC m=+150.891012101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.147286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" event={"ID":"2798c53f-d277-411d-b95d-3439db650d71","Type":"ContainerStarted","Data":"03fa1ebca033e1fc01e56b529ac40b6973c227d1558e77892f709567891febbf"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.147611 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" event={"ID":"2798c53f-d277-411d-b95d-3439db650d71","Type":"ContainerStarted","Data":"2f3f958937251cb68512086777c123a3a0619f1748035320bbb95d81c14dd78c"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.149895 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" event={"ID":"79427318-6288-4dd5-8209-dae415c0dab4","Type":"ContainerStarted","Data":"3320665da0aea93aaeabb3a7d12a24d9c23fbc23c8459c49ac6e5b99114ae88f"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.149933 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" event={"ID":"79427318-6288-4dd5-8209-dae415c0dab4","Type":"ContainerStarted","Data":"5727a850a8795bff4d86a25552d158dd3488eb4f70eeda313c6ac39eb23d130b"} Feb 18 00:36:37 crc kubenswrapper[4858]: 
I0218 00:36:37.149944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" event={"ID":"79427318-6288-4dd5-8209-dae415c0dab4","Type":"ContainerStarted","Data":"575e20cb3fd17c504d805ad974162d99735c35aa822be7877838ed91f65119b1"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.152925 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" event={"ID":"eef415f0-0fe2-4c5c-a528-3394ce644ff1","Type":"ContainerStarted","Data":"8dfdafddaa79951a08cd8940e22722c61fb639ed0cb030ee7eb9d1d98350d689"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.162473 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1edb56bf5b6f6fc7637553d23ae1db1881bca40384d437fd1bd4ec03c4844700"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.162526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"63b4e096e9cf3a2298232ec19ba6d490cf7f5319c5d05a19dc34b3df5fa9b6ca"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.162716 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.172044 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mkz5l" event={"ID":"08ad236d-4644-4e49-b9d5-194b2746a760","Type":"ContainerStarted","Data":"e6c7c9e1bdb964b2fb210b3aace5d215830a686c7385a3c408b7a2eaf5bf81f8"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.175520 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n879l" podStartSLOduration=125.175505222 podStartE2EDuration="2m5.175505222s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.173740868 +0000 UTC m=+150.479577600" watchObservedRunningTime="2026-02-18 00:36:37.175505222 +0000 UTC m=+150.481341954" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.186780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.187157 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.687145759 +0000 UTC m=+150.992982481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.190182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" event={"ID":"d21b0b26-1895-45e4-bf96-1efab1f33644","Type":"ContainerStarted","Data":"f1e830fca68db1409c5589d59f04d4aab8fea65076ad1f358cbb1cf94cff44d2"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.190224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" event={"ID":"d21b0b26-1895-45e4-bf96-1efab1f33644","Type":"ContainerStarted","Data":"29f071c2803cd2bfa9b54b0d5b9458438b5c2a0b7327f062f985c6af3777819c"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.190241 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.192631 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b772q container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.192675 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" podUID="d21b0b26-1895-45e4-bf96-1efab1f33644" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.196308 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wswhj" podStartSLOduration=124.196298563 podStartE2EDuration="2m4.196298563s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.195814892 +0000 UTC m=+150.501651624" watchObservedRunningTime="2026-02-18 00:36:37.196298563 +0000 UTC m=+150.502135295" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.203793 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" event={"ID":"73fd9054-c7ef-49ad-b80e-db70402b6af2","Type":"ContainerStarted","Data":"2eebc90410868fc25f6d62ea89187d3e26270cacde190d07d4ca8f7212207b21"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.203844 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" event={"ID":"73fd9054-c7ef-49ad-b80e-db70402b6af2","Type":"ContainerStarted","Data":"0a5e4262f22b302a320fa7dfd7bb69cb0f17f9dd34aa068475c5728769a3c4de"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.211043 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="64aaf596-bd11-435d-97ae-0c02f0f93c9f" containerID="14f939fc962251b66d4178c6a6a0db628d22ff2f5d8a29cd35768265f921daf3" exitCode=0 Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.211127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" event={"ID":"64aaf596-bd11-435d-97ae-0c02f0f93c9f","Type":"ContainerDied","Data":"14f939fc962251b66d4178c6a6a0db628d22ff2f5d8a29cd35768265f921daf3"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.211151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" event={"ID":"64aaf596-bd11-435d-97ae-0c02f0f93c9f","Type":"ContainerStarted","Data":"30db84ba93e285f4a3f8cc6338123925c11d69de9dbdaddce0feb4429fe3f820"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.220062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" event={"ID":"3e83b774-3784-4b56-b452-a3a04fc9929f","Type":"ContainerStarted","Data":"3838dbd97f56d99954f8cfe1b31d52f9c05bcb1b640ef4d644d72a77d370f31a"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.220107 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" event={"ID":"3e83b774-3784-4b56-b452-a3a04fc9929f","Type":"ContainerStarted","Data":"2f7aa0ee541c816122c815a369fde7c38343245a688bb55b1b987836ccd9a6b4"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.226232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" event={"ID":"c4742d2d-6f4b-4dfb-869e-ed06f7c81a0c","Type":"ContainerStarted","Data":"695484cd854598d2de89512ddd08918a29371322bb6fcd522474de1fdfb23bb5"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.242965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jnlh5" event={"ID":"d7046c26-d46d-419b-817d-a675e207d07c","Type":"ContainerStarted","Data":"1468b395daeaf5a92e8b2e962fce833ec6899130d3e31e755d0ab61342bc52a6"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.243010 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jnlh5" event={"ID":"d7046c26-d46d-419b-817d-a675e207d07c","Type":"ContainerStarted","Data":"c546606248e6bbf7eabf89c0de0532a6bedc32fa3476d00c1e7f9c2b3270ddd1"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.243019 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jnlh5" event={"ID":"d7046c26-d46d-419b-817d-a675e207d07c","Type":"ContainerStarted","Data":"e4db601c70f46a681e578e0872517b378e9720e77ac1fcc495c06b3a4941cf34"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.243708 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.243973 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-54bhc" podStartSLOduration=124.243957857 podStartE2EDuration="2m4.243957857s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.241065745 +0000 UTC m=+150.546902467" watchObservedRunningTime="2026-02-18 00:36:37.243957857 +0000 UTC m=+150.549794589" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.252322 4858 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" event={"ID":"44c5ceae-0c80-4b01-a773-8c222c900f34","Type":"ContainerStarted","Data":"924b5688b1c2f630b4dbc2ddd72ef8eb354e96c3b6d1797995690d36127b5ffb"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.257853 4858 generic.go:334] "Generic (PLEG): container finished" podID="96fed31a-2574-4ee1-9781-f4cfd1f9c68b" containerID="130cf37e6145eb3d38478281c63fd7ce9c5b648dcde1ccda0de3e9fb08679e0c" exitCode=0 Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.257944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" event={"ID":"96fed31a-2574-4ee1-9781-f4cfd1f9c68b","Type":"ContainerDied","Data":"130cf37e6145eb3d38478281c63fd7ce9c5b648dcde1ccda0de3e9fb08679e0c"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.257969 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" event={"ID":"96fed31a-2574-4ee1-9781-f4cfd1f9c68b","Type":"ContainerStarted","Data":"7a7bfc379fb18a622f9ecaca002f44016c5e9f6411aa90c93c7f60a800eec29e"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.259512 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" podStartSLOduration=124.259474889 podStartE2EDuration="2m4.259474889s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.25792202 +0000 UTC m=+150.563758752" watchObservedRunningTime="2026-02-18 00:36:37.259474889 +0000 UTC m=+150.565311621" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.274903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-26v6w" event={"ID":"2bbea11f-6abd-4472-af4f-2b838e9ad97e","Type":"ContainerStarted","Data":"69f7f4cbcec6d563be90279bd8bc2bec87588794f718b9736dd165a773e487da"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.274958 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-26v6w" event={"ID":"2bbea11f-6abd-4472-af4f-2b838e9ad97e","Type":"ContainerStarted","Data":"4f9af18441bb3fe9ec858d995fd797b6de24c4bac9753d90c8ec7aacc3462dcc"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.280548 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7l2r7" podStartSLOduration=124.280526927 podStartE2EDuration="2m4.280526927s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.274590131 +0000 UTC m=+150.580426863" watchObservedRunningTime="2026-02-18 00:36:37.280526927 +0000 UTC m=+150.586363659" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.283476 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" event={"ID":"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f","Type":"ContainerStarted","Data":"e31ee4270c03852a1016d751d15c743e105de9a3e72ae1543a73eac17d2258fd"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.287517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.287669 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.787650443 +0000 UTC m=+151.093487185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.287882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.294298 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.794284295 +0000 UTC m=+151.100121027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.296218 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" event={"ID":"693a6651-227a-4a62-85df-4a7e667c3daf","Type":"ContainerStarted","Data":"4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.296988 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.310646 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mfftk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.310702 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" podUID="693a6651-227a-4a62-85df-4a7e667c3daf" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.322186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"83b1d3aac1dd802e86277b93bddb7408d42e65fa056265e0e1be36df0506e7f5"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.322241 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3b57fabffb14c32841fcfc6c9b18969f198de5110d0972c0aaa6e83eb3731916"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.328476 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"30b8bef90374d074f4a2cef1c8f3041b24ef379e6aa474dd0ccae53922718fc0"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.328535 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4d5250f5935f814ab965a5f455dbd8bb03cd16c5d0589668aa9784e8e73dd5fd"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.331098 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" event={"ID":"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34","Type":"ContainerStarted","Data":"0108ee457bde4c4b8d70b9b80e5bfd9393784b012bca0065d0eb4c799f9404e3"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.331118 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" event={"ID":"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34","Type":"ContainerStarted","Data":"c1eb5c9b5559aacddc6a85452f9ae65d2c9c946ff7c5e389b7175109166e8512"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.333374 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gtw2k" event={"ID":"646ba69d-8375-436b-a16f-e7bae5475ac6","Type":"ContainerStarted","Data":"54f0d1436fc38870a8d3854014ccd69fe8a8bbc4027241e6bb4cb3a669387d52"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.334119 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.335556 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-gtw2k container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.335584 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gtw2k" podUID="646ba69d-8375-436b-a16f-e7bae5475ac6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.339120 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" event={"ID":"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70","Type":"ContainerStarted","Data":"4d88eb291a32412a6ea46d9ac08b9a1b2cb9378609574446e188fdb93df7d510"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.339191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" event={"ID":"d56a2ef9-2679-43f8-bf70-3b8f1eea8c70","Type":"ContainerStarted","Data":"d08008aa20906d959e76afc3cbd6fd29af224a1128f63d5958cc7aacd6a75775"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.339206 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.340247 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" event={"ID":"706d9c75-e27c-4596-80d8-68bf71015ca0","Type":"ContainerStarted","Data":"de05cde6a3b9d7ae8286c07333dd8042197815a0d2ac0840298c0591059f4244"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.340286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" event={"ID":"706d9c75-e27c-4596-80d8-68bf71015ca0","Type":"ContainerStarted","Data":"d184b03eacfff50df620478e465c98b200a116a1d0032f8e84b939ade56598da"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.340311 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" event={"ID":"706d9c75-e27c-4596-80d8-68bf71015ca0","Type":"ContainerStarted","Data":"b823ac5e8fc20128cf7e2e21674b3a8ec28ac35398de7b75368f44497d7dfe35"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 
00:36:37.341163 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.341291 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9dqf6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" start-of-body= Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.341319 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" podUID="d56a2ef9-2679-43f8-bf70-3b8f1eea8c70" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.342595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" event={"ID":"f8506161-354f-42a0-8f15-9c02ba3fe215","Type":"ContainerStarted","Data":"7803481575c466f7224c4382157c155079583b4d876976b2d9b1ef5aac9bb8fa"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.343180 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.345866 4858 generic.go:334] "Generic (PLEG): container finished" podID="a4447453-79a5-4008-89ec-add924803b82" containerID="8865826b4288506db0af15d46b31bc717e7f35b19f7523b33c6f60e3dbb0fbea" exitCode=0 Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.345925 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" event={"ID":"a4447453-79a5-4008-89ec-add924803b82","Type":"ContainerDied","Data":"8865826b4288506db0af15d46b31bc717e7f35b19f7523b33c6f60e3dbb0fbea"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.348360 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wmw2" podStartSLOduration=125.348347306 podStartE2EDuration="2m5.348347306s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.311066548 +0000 UTC m=+150.616903280" watchObservedRunningTime="2026-02-18 00:36:37.348347306 +0000 UTC m=+150.654184038" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.361790 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" event={"ID":"a3cba07a-2fd4-4794-bae6-53b73a54905a","Type":"ContainerStarted","Data":"7b6988f8a70ea8b2e9b8c229fb0273ded69abc66f88e978c8c363d5d2f3644be"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.361851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" event={"ID":"a3cba07a-2fd4-4794-bae6-53b73a54905a","Type":"ContainerStarted","Data":"d104c234d9593fcb29937b9ad857d037fbccb52fcff8de74946df45bcd9e481c"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.361861 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" 
event={"ID":"a3cba07a-2fd4-4794-bae6-53b73a54905a","Type":"ContainerStarted","Data":"5d73697806f9e3868ea573fd37b1da47666674c6622b2d8b1754b90ad4c477e5"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.373017 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.376859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" event={"ID":"bfc20a1c-6687-4ddd-baad-b18790cae2f9","Type":"ContainerStarted","Data":"df1ec9393cb4fa0fd0aed08dc87def56d1731fcd8f047fbf4d7449f3e5a96ec5"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.376891 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" event={"ID":"bfc20a1c-6687-4ddd-baad-b18790cae2f9","Type":"ContainerStarted","Data":"59811cfb0a27e644971c377611c8036788a055379003a14674e42c957d754247"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.397036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8drdx" event={"ID":"12b8a5f7-869c-4343-8224-ae76d73073cf","Type":"ContainerStarted","Data":"718ae9a96d2f3b91325f59258518f4fad129a76b4083e13408540cccde315122"} Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.398680 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.399074 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-6djsl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.399124 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6djsl" podUID="b132ff06-2a28-42df-b43b-f923a76b4cca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.399681 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:37.89966239 +0000 UTC m=+151.205499122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.410304 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" podStartSLOduration=124.410280041 podStartE2EDuration="2m4.410280041s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.404704123 +0000 UTC m=+150.710540855" watchObservedRunningTime="2026-02-18 00:36:37.410280041 +0000 UTC m=+150.716116773" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.411277 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-9jdl7" podStartSLOduration=125.411270785 podStartE2EDuration="2m5.411270785s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.373352762 +0000 UTC m=+150.679189494" watchObservedRunningTime="2026-02-18 00:36:37.411270785 +0000 UTC m=+150.717107517" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.443540 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:37 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:37 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:37 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.443820 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.494389 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2kdn" podStartSLOduration=124.49437123 podStartE2EDuration="2m4.49437123s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.450756447 +0000 UTC m=+150.756593179" watchObservedRunningTime="2026-02-18 00:36:37.49437123 +0000 UTC m=+150.800207962" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.500642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 
00:36:37.500937 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.000924882 +0000 UTC m=+151.306761614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.514744 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.604907 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.605120 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.105098266 +0000 UTC m=+151.410934998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.606030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.606617 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.106608494 +0000 UTC m=+151.412445226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.675214 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" podStartSLOduration=125.675191572 podStartE2EDuration="2m5.675191572s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.628285047 +0000 UTC m=+150.934121839" watchObservedRunningTime="2026-02-18 00:36:37.675191572 +0000 UTC m=+150.981028304" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.676347 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-26v6w" podStartSLOduration=7.67634152 podStartE2EDuration="7.67634152s" podCreationTimestamp="2026-02-18 00:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.674262099 +0000 UTC m=+150.980098831" watchObservedRunningTime="2026-02-18 00:36:37.67634152 +0000 UTC m=+150.982178252" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.711060 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.711375 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.211360222 +0000 UTC m=+151.517196954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.752576 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gtw2k" podStartSLOduration=125.752562246 podStartE2EDuration="2m5.752562246s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.717965504 +0000 UTC m=+151.023802236" watchObservedRunningTime="2026-02-18 00:36:37.752562246 +0000 UTC m=+151.058398978" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.775791 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pjmlv" podStartSLOduration=124.775772437 podStartE2EDuration="2m4.775772437s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.773261486 +0000 UTC m=+151.079098218" watchObservedRunningTime="2026-02-18 00:36:37.775772437 +0000 UTC m=+151.081609169" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.812266 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.812559 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.312548693 +0000 UTC m=+151.618385415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.814653 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jnlh5" podStartSLOduration=7.814628274 podStartE2EDuration="7.814628274s" podCreationTimestamp="2026-02-18 00:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.808427371 +0000 UTC m=+151.114264103" watchObservedRunningTime="2026-02-18 00:36:37.814628274 +0000 UTC m=+151.120465026" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.864421 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" podStartSLOduration=124.864406809 podStartE2EDuration="2m4.864406809s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.83275423 +0000 UTC m=+151.138590962" watchObservedRunningTime="2026-02-18 00:36:37.864406809 +0000 UTC m=+151.170243541" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.866080 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" podStartSLOduration=124.86607349 podStartE2EDuration="2m4.86607349s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.864780918 +0000 UTC m=+151.170617650" watchObservedRunningTime="2026-02-18 00:36:37.86607349 +0000 UTC m=+151.171910222" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.899158 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zk6hd" podStartSLOduration=124.899140843 podStartE2EDuration="2m4.899140843s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.898193031 +0000 UTC m=+151.204029763" watchObservedRunningTime="2026-02-18 00:36:37.899140843 +0000 UTC m=+151.204977575" Feb 18 00:36:37 crc kubenswrapper[4858]: I0218 00:36:37.914886 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:37 crc kubenswrapper[4858]: E0218 00:36:37.915204 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 00:36:38.415192539 +0000 UTC m=+151.721029271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:37.999735 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kp8d2" podStartSLOduration=124.999719349 podStartE2EDuration="2m4.999719349s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:37.998338405 +0000 UTC m=+151.304175137" watchObservedRunningTime="2026-02-18 00:36:37.999719349 +0000 UTC m=+151.305556081" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.020118 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.020405 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.520390878 +0000 UTC m=+151.826227610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.121420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.121782 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.621763653 +0000 UTC m=+151.927600385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.223054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.223324 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.723311363 +0000 UTC m=+152.029148095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.323836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.324118 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.824103194 +0000 UTC m=+152.129939926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.408242 4858 generic.go:334] "Generic (PLEG): container finished" podID="a5bd9f27-973a-4ec3-91b8-87c2c20c6c34" containerID="0108ee457bde4c4b8d70b9b80e5bfd9393784b012bca0065d0eb4c799f9404e3" exitCode=0 Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.408299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" event={"ID":"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34","Type":"ContainerDied","Data":"0108ee457bde4c4b8d70b9b80e5bfd9393784b012bca0065d0eb4c799f9404e3"} Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.412912 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" event={"ID":"a4447453-79a5-4008-89ec-add924803b82","Type":"ContainerStarted","Data":"8563fc7e5bcb93e34ab4d63b43af28b7ae2a2ef60e0772773fa1a913b9760dca"} Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.414926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" event={"ID":"96fed31a-2574-4ee1-9781-f4cfd1f9c68b","Type":"ContainerStarted","Data":"f29d4921d659c0d4ce7337d964a019e722f81f7d1204bc28a5896fb8c2208b03"} Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.415397 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.417127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" event={"ID":"64aaf596-bd11-435d-97ae-0c02f0f93c9f","Type":"ContainerStarted","Data":"d7f49286d84597e3b1ff38a66b2deafa8d5cc0aa56db26744b243e78b44bc710"} Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.417154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" event={"ID":"64aaf596-bd11-435d-97ae-0c02f0f93c9f","Type":"ContainerStarted","Data":"3d9f3a2d2bf434c0fa24b4bc969f3d2e1e67328799b9a939beadaeb12db4cef7"} Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.421465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" event={"ID":"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f","Type":"ContainerStarted","Data":"fedc65dfa83ce932e298d54f21cc4a85a30d5c5966f9cdcf5e8f23841b3593f9"} Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.421533 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mfftk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.421570 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" podUID="693a6651-227a-4a62-85df-4a7e667c3daf" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.424547 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-6djsl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.424583 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6djsl" podUID="b132ff06-2a28-42df-b43b-f923a76b4cca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.425800 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.426089 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:38.926078554 +0000 UTC m=+152.231915286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.429188 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:38 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:38 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:38 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.429254 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.444625 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9dqf6" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.491949 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" podStartSLOduration=125.491927445 podStartE2EDuration="2m5.491927445s" podCreationTimestamp="2026-02-18 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 00:36:38.464820977 +0000 UTC m=+151.770657709" watchObservedRunningTime="2026-02-18 00:36:38.491927445 +0000 UTC m=+151.797764177" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.527032 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.527224 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.027199353 +0000 UTC m=+152.333036085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.540988 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.541642 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.041629408 +0000 UTC m=+152.347466140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.544228 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" podStartSLOduration=126.544211452 podStartE2EDuration="2m6.544211452s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:38.498807284 +0000 UTC m=+151.804644016" watchObservedRunningTime="2026-02-18 00:36:38.544211452 +0000 UTC m=+151.850048184" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.546609 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" podStartSLOduration=126.546595021 podStartE2EDuration="2m6.546595021s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:38.544047118 +0000 UTC m=+151.849883850" watchObservedRunningTime="2026-02-18 00:36:38.546595021 +0000 UTC m=+151.852431753" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.575569 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gtw2k" Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.643123 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.643284 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.14325373 +0000 UTC m=+152.449090462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.643387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.643684 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.14367241 +0000 UTC m=+152.449509142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.744720 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.744929 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.244899332 +0000 UTC m=+152.550736064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.744998 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.745297 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.245286271 +0000 UTC m=+152.551123003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.846267 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.846476 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.346448971 +0000 UTC m=+152.652285703 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.846545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.846878 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.346865961 +0000 UTC m=+152.652702693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.948072 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.948269 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.448235027 +0000 UTC m=+152.754071759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:38 crc kubenswrapper[4858]: I0218 00:36:38.948639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:38 crc kubenswrapper[4858]: E0218 00:36:38.948925 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.448913773 +0000 UTC m=+152.754750505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.049742 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:39 crc kubenswrapper[4858]: E0218 00:36:39.049936 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.549912029 +0000 UTC m=+152.855748761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.050074 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:39 crc kubenswrapper[4858]: E0218 00:36:39.050505 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.550480854 +0000 UTC m=+152.856317586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.092284 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b772q" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.151665 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:39 crc kubenswrapper[4858]: E0218 00:36:39.151765 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.651750676 +0000 UTC m=+152.957587408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.151956 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:39 crc kubenswrapper[4858]: E0218 00:36:39.152206 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.652199386 +0000 UTC m=+152.958036118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.215640 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cdx8z"] Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.216512 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.220642 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.234293 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdx8z"] Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.252891 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:39 crc kubenswrapper[4858]: E0218 00:36:39.253010 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.752986117 +0000 UTC m=+153.058822839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.253113 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:39 crc kubenswrapper[4858]: E0218 00:36:39.253404 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.753392618 +0000 UTC m=+153.059229350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.354367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.354752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpzbp\" (UniqueName: \"kubernetes.io/projected/10004d92-1526-4fef-a0c1-dbd5077a46a0-kube-api-access-gpzbp\") pod \"certified-operators-cdx8z\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.354810 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-utilities\") pod \"certified-operators-cdx8z\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.354867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-catalog-content\") pod \"certified-operators-cdx8z\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: E0218 00:36:39.355005 4858 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.854987468 +0000 UTC m=+153.160824200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.404855 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hvp9z"] Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.407280 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.413647 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.414013 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvp9z"] Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.432114 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:39 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:39 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:39 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.432196 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.450280 4858 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.468317 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" event={"ID":"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f","Type":"ContainerStarted","Data":"4b70a6e7a8fd7cb6416172c80a3aad283c85cfb16accbd509f2202030398d3b1"} Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.468357 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" event={"ID":"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f","Type":"ContainerStarted","Data":"2791cd2af5a59f0a6d284fda41e2acb1974b859918cb34ac2e6ba1c2c84f7668"} Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.468722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-utilities\") pod \"community-operators-hvp9z\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " 
pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.468751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-catalog-content\") pod \"certified-operators-cdx8z\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.468794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpzbp\" (UniqueName: \"kubernetes.io/projected/10004d92-1526-4fef-a0c1-dbd5077a46a0-kube-api-access-gpzbp\") pod \"certified-operators-cdx8z\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.468814 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-catalog-content\") pod \"community-operators-hvp9z\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.468835 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhj5l\" (UniqueName: \"kubernetes.io/projected/4f2604ed-931f-4fc1-96dc-cace175d2905-kube-api-access-bhj5l\") pod \"community-operators-hvp9z\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.468857 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-utilities\") pod \"certified-operators-cdx8z\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.468887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:39 crc kubenswrapper[4858]: E0218 00:36:39.469102 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:36:39.969090937 +0000 UTC m=+153.274927669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qf4s8" (UID: "329e20a2-8966-48c0-8300-bc996770880d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.469627 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-catalog-content\") pod \"certified-operators-cdx8z\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.470032 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-utilities\") pod \"certified-operators-cdx8z\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.501213 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.502538 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpzbp\" (UniqueName: \"kubernetes.io/projected/10004d92-1526-4fef-a0c1-dbd5077a46a0-kube-api-access-gpzbp\") pod \"certified-operators-cdx8z\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.533134 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.572945 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.573193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-catalog-content\") pod \"community-operators-hvp9z\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.573267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhj5l\" (UniqueName: \"kubernetes.io/projected/4f2604ed-931f-4fc1-96dc-cace175d2905-kube-api-access-bhj5l\") pod \"community-operators-hvp9z\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.573407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-utilities\") pod \"community-operators-hvp9z\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: E0218 00:36:39.574726 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:36:40.074704416 +0000 UTC m=+153.380541148 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.575738 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-utilities\") pod \"community-operators-hvp9z\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.577474 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-catalog-content\") pod \"community-operators-hvp9z\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.595972 4858 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T00:36:39.450313844Z","Handler":null,"Name":""} Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.598346 4858 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.598374 4858 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.598416 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k77hk"] Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.600623 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.600719 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhj5l\" (UniqueName: \"kubernetes.io/projected/4f2604ed-931f-4fc1-96dc-cace175d2905-kube-api-access-bhj5l\") pod \"community-operators-hvp9z\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.606566 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k77hk"] Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.675245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.675291 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-catalog-content\") pod \"certified-operators-k77hk\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.675330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr9m4\" (UniqueName: \"kubernetes.io/projected/9999291e-e811-4c10-8720-73bfeb32c3cf-kube-api-access-nr9m4\") pod \"certified-operators-k77hk\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.675378 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-utilities\") pod \"certified-operators-k77hk\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.679362 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.679400 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.724677 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.755162 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qf4s8\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.782321 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.782552 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-utilities\") pod \"certified-operators-k77hk\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.782601 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-catalog-content\") pod \"certified-operators-k77hk\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.782635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr9m4\" (UniqueName: \"kubernetes.io/projected/9999291e-e811-4c10-8720-73bfeb32c3cf-kube-api-access-nr9m4\") pod \"certified-operators-k77hk\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.783317 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-utilities\") pod \"certified-operators-k77hk\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.783549 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-catalog-content\") pod \"certified-operators-k77hk\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.797298 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q49pf"] Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.798178 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.801296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr9m4\" (UniqueName: \"kubernetes.io/projected/9999291e-e811-4c10-8720-73bfeb32c3cf-kube-api-access-nr9m4\") pod \"certified-operators-k77hk\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.807291 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q49pf"] Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.826274 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.841884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.883860 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-config-volume\") pod \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.883918 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5496t\" (UniqueName: \"kubernetes.io/projected/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-kube-api-access-5496t\") pod \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.883953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-secret-volume\") pod \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\" (UID: \"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34\") " Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.884136 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-utilities\") pod \"community-operators-q49pf\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.884158 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snh7b\" (UniqueName: \"kubernetes.io/projected/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-kube-api-access-snh7b\") pod \"community-operators-q49pf\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.884219 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-catalog-content\") pod \"community-operators-q49pf\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.884334 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-config-volume" (OuterVolumeSpecName: "config-volume") pod "a5bd9f27-973a-4ec3-91b8-87c2c20c6c34" (UID: "a5bd9f27-973a-4ec3-91b8-87c2c20c6c34"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.913687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-kube-api-access-5496t" (OuterVolumeSpecName: "kube-api-access-5496t") pod "a5bd9f27-973a-4ec3-91b8-87c2c20c6c34" (UID: "a5bd9f27-973a-4ec3-91b8-87c2c20c6c34"). InnerVolumeSpecName "kube-api-access-5496t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.914676 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a5bd9f27-973a-4ec3-91b8-87c2c20c6c34" (UID: "a5bd9f27-973a-4ec3-91b8-87c2c20c6c34"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.920481 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.941567 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdx8z"] Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.974875 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.986313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-utilities\") pod \"community-operators-q49pf\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.986364 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snh7b\" (UniqueName: \"kubernetes.io/projected/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-kube-api-access-snh7b\") pod \"community-operators-q49pf\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.986452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-catalog-content\") pod \"community-operators-q49pf\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.986484 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.986520 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5496t\" (UniqueName: \"kubernetes.io/projected/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-kube-api-access-5496t\") on node \"crc\" DevicePath \"\"" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.986530 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.987164 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-utilities\") pod \"community-operators-q49pf\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:39 crc kubenswrapper[4858]: I0218 00:36:39.987260 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-catalog-content\") pod \"community-operators-q49pf\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.006024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snh7b\" (UniqueName: \"kubernetes.io/projected/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-kube-api-access-snh7b\") pod \"community-operators-q49pf\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.040097 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvp9z"] Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.131888 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.224967 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k77hk"] Feb 18 00:36:40 crc kubenswrapper[4858]: W0218 00:36:40.259710 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9999291e_e811_4c10_8720_73bfeb32c3cf.slice/crio-3ea13f8d029672204f4ee11d5e094823b3a360b896a64186c4ba762408d8c48e WatchSource:0}: Error finding container 3ea13f8d029672204f4ee11d5e094823b3a360b896a64186c4ba762408d8c48e: Status 404 returned error can't find the container with id 3ea13f8d029672204f4ee11d5e094823b3a360b896a64186c4ba762408d8c48e Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.392624 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qf4s8"] Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.425182 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:40 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:40 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:40 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.425248 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.435447 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q49pf"] Feb 18 00:36:40 crc kubenswrapper[4858]: W0218 00:36:40.453910 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7244cf66_b72d_4f2d_a463_89e4e8e37b2c.slice/crio-8590f7f6632618e91c84bdf67dcc20921595dea1f1b2c759f0d302d597c945da WatchSource:0}: Error finding container 8590f7f6632618e91c84bdf67dcc20921595dea1f1b2c759f0d302d597c945da: Status 404 returned error can't find the container with id 8590f7f6632618e91c84bdf67dcc20921595dea1f1b2c759f0d302d597c945da Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.474880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" event={"ID":"329e20a2-8966-48c0-8300-bc996770880d","Type":"ContainerStarted","Data":"c305c72451f30f5f83399690f166a3b55e7a900213068f4b37698d51721eb4bd"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.498829 4858 generic.go:334] "Generic (PLEG): container finished" podID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerID="d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882" exitCode=0 Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.498904 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvp9z" event={"ID":"4f2604ed-931f-4fc1-96dc-cace175d2905","Type":"ContainerDied","Data":"d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.498928 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-hvp9z" event={"ID":"4f2604ed-931f-4fc1-96dc-cace175d2905","Type":"ContainerStarted","Data":"e462976b0b93a35d2e3928d4a4ad1e8a5d5c1681945550c31642266ae66d15c8"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.503901 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.505973 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q49pf" event={"ID":"7244cf66-b72d-4f2d-a463-89e4e8e37b2c","Type":"ContainerStarted","Data":"8590f7f6632618e91c84bdf67dcc20921595dea1f1b2c759f0d302d597c945da"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.512903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" event={"ID":"7cc6c0de-0fa4-4366-b66d-7e8753c27f9f","Type":"ContainerStarted","Data":"1db966797936f73dc221ba60bd0afcec5d1dee7a53062270b169997d07791a07"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.515024 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.515375 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw" event={"ID":"a5bd9f27-973a-4ec3-91b8-87c2c20c6c34","Type":"ContainerDied","Data":"c1eb5c9b5559aacddc6a85452f9ae65d2c9c946ff7c5e389b7175109166e8512"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.515415 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1eb5c9b5559aacddc6a85452f9ae65d2c9c946ff7c5e389b7175109166e8512" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.522614 4858 generic.go:334] "Generic (PLEG): container finished" podID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerID="e055ca5f64ecdd9f0dee82ffeab2919e8cacabb58b6f8465ba15eb66d8a0fd97" exitCode=0 Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.522918 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k77hk" event={"ID":"9999291e-e811-4c10-8720-73bfeb32c3cf","Type":"ContainerDied","Data":"e055ca5f64ecdd9f0dee82ffeab2919e8cacabb58b6f8465ba15eb66d8a0fd97"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.522970 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k77hk" event={"ID":"9999291e-e811-4c10-8720-73bfeb32c3cf","Type":"ContainerStarted","Data":"3ea13f8d029672204f4ee11d5e094823b3a360b896a64186c4ba762408d8c48e"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.527739 4858 generic.go:334] "Generic (PLEG): container finished" podID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerID="562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4" exitCode=0 Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.528313 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdx8z" event={"ID":"10004d92-1526-4fef-a0c1-dbd5077a46a0","Type":"ContainerDied","Data":"562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.528374 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdx8z" 
event={"ID":"10004d92-1526-4fef-a0c1-dbd5077a46a0","Type":"ContainerStarted","Data":"22ed3db9108330673ac7afe94858a22a800cc2d9df1fa20715639763f38df855"} Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.533837 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8987" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.536509 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" podStartSLOduration=10.53647775 podStartE2EDuration="10.53647775s" podCreationTimestamp="2026-02-18 00:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:40.535928397 +0000 UTC m=+153.841765129" watchObservedRunningTime="2026-02-18 00:36:40.53647775 +0000 UTC m=+153.842314482" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.668152 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 00:36:40 crc kubenswrapper[4858]: E0218 00:36:40.668344 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5bd9f27-973a-4ec3-91b8-87c2c20c6c34" containerName="collect-profiles" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.668360 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5bd9f27-973a-4ec3-91b8-87c2c20c6c34" containerName="collect-profiles" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.668468 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5bd9f27-973a-4ec3-91b8-87c2c20c6c34" containerName="collect-profiles" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.669160 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.674400 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.674855 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.682180 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.801582 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2468fb90-c123-4c1a-8483-5af234b09c07-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2468fb90-c123-4c1a-8483-5af234b09c07\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.801639 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468fb90-c123-4c1a-8483-5af234b09c07-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2468fb90-c123-4c1a-8483-5af234b09c07\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.903281 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2468fb90-c123-4c1a-8483-5af234b09c07-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2468fb90-c123-4c1a-8483-5af234b09c07\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.903567 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2468fb90-c123-4c1a-8483-5af234b09c07-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2468fb90-c123-4c1a-8483-5af234b09c07\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.903644 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468fb90-c123-4c1a-8483-5af234b09c07-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2468fb90-c123-4c1a-8483-5af234b09c07\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.940805 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468fb90-c123-4c1a-8483-5af234b09c07-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2468fb90-c123-4c1a-8483-5af234b09c07\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:40 crc kubenswrapper[4858]: I0218 00:36:40.991862 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.186587 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.211148 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sx858"] Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.212293 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.214577 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.229803 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sx858"] Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.311019 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-588js\" (UniqueName: \"kubernetes.io/projected/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-kube-api-access-588js\") pod \"redhat-marketplace-sx858\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.311115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-utilities\") pod \"redhat-marketplace-sx858\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.311152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-catalog-content\") pod \"redhat-marketplace-sx858\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.412168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-utilities\") pod \"redhat-marketplace-sx858\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.412223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-catalog-content\") pod \"redhat-marketplace-sx858\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.412250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-588js\" (UniqueName: \"kubernetes.io/projected/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-kube-api-access-588js\") pod \"redhat-marketplace-sx858\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.412688 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-utilities\") pod \"redhat-marketplace-sx858\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.412759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-catalog-content\") pod \"redhat-marketplace-sx858\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.424752 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:41 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:41 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:41 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.424814 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.432031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-588js\" (UniqueName: \"kubernetes.io/projected/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-kube-api-access-588js\") pod \"redhat-marketplace-sx858\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.436703 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.532647 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.533762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2468fb90-c123-4c1a-8483-5af234b09c07","Type":"ContainerStarted","Data":"cf131fa43f3ef82e294bade41b59c606984ff1484bb5133ae1fecdb181928f15"} Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.535048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" event={"ID":"329e20a2-8966-48c0-8300-bc996770880d","Type":"ContainerStarted","Data":"36a3df7e07c741e73366ffd3fa0cd0f165970a5c334c841d30c0867f8fb91ff8"} Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.535801 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.537121 4858 generic.go:334] "Generic (PLEG): container finished" podID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerID="1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f" exitCode=0 Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.538152 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q49pf" event={"ID":"7244cf66-b72d-4f2d-a463-89e4e8e37b2c","Type":"ContainerDied","Data":"1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f"} Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.554460 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" podStartSLOduration=129.554439996 podStartE2EDuration="2m9.554439996s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:36:41.549550096 +0000 UTC m=+154.855386858" watchObservedRunningTime="2026-02-18 00:36:41.554439996 +0000 UTC m=+154.860276738" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.591290 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zvz6w"] Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.592399 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.602735 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvz6w"] Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.716912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-utilities\") pod \"redhat-marketplace-zvz6w\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.717433 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdlgw\" (UniqueName: \"kubernetes.io/projected/054e4a88-251f-4406-bbed-52397e7698b4-kube-api-access-xdlgw\") pod \"redhat-marketplace-zvz6w\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.718045 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-catalog-content\") pod \"redhat-marketplace-zvz6w\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.731968 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sx858"] Feb 18 00:36:41 crc kubenswrapper[4858]: W0218 00:36:41.749655 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb398b3cc_afb3_4dad_bcf7_f9b2c9278be6.slice/crio-f6bc62812a1d8d3bd87963011693582adaddcca4e2dcccc9645d1dcdf3fb4ea5 WatchSource:0}: Error finding container f6bc62812a1d8d3bd87963011693582adaddcca4e2dcccc9645d1dcdf3fb4ea5: Status 404 returned error can't find the container with id f6bc62812a1d8d3bd87963011693582adaddcca4e2dcccc9645d1dcdf3fb4ea5 Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.818704 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-catalog-content\") pod \"redhat-marketplace-zvz6w\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.818747 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-utilities\") pod \"redhat-marketplace-zvz6w\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.818771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdlgw\" (UniqueName: \"kubernetes.io/projected/054e4a88-251f-4406-bbed-52397e7698b4-kube-api-access-xdlgw\") pod \"redhat-marketplace-zvz6w\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.819530 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-catalog-content\") pod \"redhat-marketplace-zvz6w\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.819555 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-utilities\") pod \"redhat-marketplace-zvz6w\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.834318 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.835232 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.837363 4858 patch_prober.go:28] interesting pod/console-f9d7485db-lpg4n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.837409 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lpg4n" podUID="a82bb6ce-4801-417a-a4e2-93d1667999ee" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.840989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdlgw\" (UniqueName: \"kubernetes.io/projected/054e4a88-251f-4406-bbed-52397e7698b4-kube-api-access-xdlgw\") pod \"redhat-marketplace-zvz6w\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:41 crc kubenswrapper[4858]: I0218 00:36:41.908931 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.075398 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvz6w"] Feb 18 00:36:42 crc kubenswrapper[4858]: W0218 00:36:42.103072 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod054e4a88_251f_4406_bbed_52397e7698b4.slice/crio-89047c9ffcbf1663cd19fdf471497fccf5fa245803dd78a6f0c89b66807284d8 WatchSource:0}: Error finding container 89047c9ffcbf1663cd19fdf471497fccf5fa245803dd78a6f0c89b66807284d8: Status 404 returned error can't find the container with id 89047c9ffcbf1663cd19fdf471497fccf5fa245803dd78a6f0c89b66807284d8 Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.426862 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:42 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:42 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:42 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.427070 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.545877 4858 generic.go:334] "Generic (PLEG): container finished" podID="2468fb90-c123-4c1a-8483-5af234b09c07" containerID="8041cd638459fd0f34b3c1be38411577ed12708298797d6b330ed18110b56a48" exitCode=0 Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.545999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2468fb90-c123-4c1a-8483-5af234b09c07","Type":"ContainerDied","Data":"8041cd638459fd0f34b3c1be38411577ed12708298797d6b330ed18110b56a48"} Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.547945 4858 generic.go:334] "Generic (PLEG): container finished" podID="054e4a88-251f-4406-bbed-52397e7698b4" containerID="b02eebf48fb9cee392d0c043cea1626c0b95bf592291adb262ffb28b56dda150" exitCode=0 Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.548483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvz6w" event={"ID":"054e4a88-251f-4406-bbed-52397e7698b4","Type":"ContainerDied","Data":"b02eebf48fb9cee392d0c043cea1626c0b95bf592291adb262ffb28b56dda150"} Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.548526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvz6w" event={"ID":"054e4a88-251f-4406-bbed-52397e7698b4","Type":"ContainerStarted","Data":"89047c9ffcbf1663cd19fdf471497fccf5fa245803dd78a6f0c89b66807284d8"} Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.550436 4858 generic.go:334] "Generic (PLEG): container finished" podID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerID="343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66" exitCode=0 Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.551061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sx858" 
event={"ID":"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6","Type":"ContainerDied","Data":"343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66"} Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.551081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sx858" event={"ID":"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6","Type":"ContainerStarted","Data":"f6bc62812a1d8d3bd87963011693582adaddcca4e2dcccc9645d1dcdf3fb4ea5"} Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.592406 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qvmxz"] Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.593526 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.597568 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.600893 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvmxz"] Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.731070 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-utilities\") pod \"redhat-operators-qvmxz\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.731114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79p2\" (UniqueName: \"kubernetes.io/projected/41a8b51a-55c4-476a-a895-6913c143f33a-kube-api-access-s79p2\") pod \"redhat-operators-qvmxz\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.731383 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-catalog-content\") pod \"redhat-operators-qvmxz\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.833423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-catalog-content\") pod \"redhat-operators-qvmxz\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.833604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-utilities\") pod \"redhat-operators-qvmxz\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.833623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s79p2\" (UniqueName: \"kubernetes.io/projected/41a8b51a-55c4-476a-a895-6913c143f33a-kube-api-access-s79p2\") pod \"redhat-operators-qvmxz\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " 
pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.834136 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-catalog-content\") pod \"redhat-operators-qvmxz\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.834170 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-utilities\") pod \"redhat-operators-qvmxz\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.852180 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s79p2\" (UniqueName: \"kubernetes.io/projected/41a8b51a-55c4-476a-a895-6913c143f33a-kube-api-access-s79p2\") pod \"redhat-operators-qvmxz\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.917456 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.992688 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2nkk7"] Feb 18 00:36:42 crc kubenswrapper[4858]: I0218 00:36:42.993763 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.009644 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nkk7"] Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.036276 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-catalog-content\") pod \"redhat-operators-2nkk7\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.036336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-utilities\") pod \"redhat-operators-2nkk7\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.036373 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkml8\" (UniqueName: \"kubernetes.io/projected/01bb6b52-37a8-45cc-9675-f951757d4934-kube-api-access-xkml8\") pod \"redhat-operators-2nkk7\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.078146 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-6djsl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.078394 4858 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6djsl" podUID="b132ff06-2a28-42df-b43b-f923a76b4cca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.078423 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-6djsl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.078449 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6djsl" podUID="b132ff06-2a28-42df-b43b-f923a76b4cca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.137817 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-catalog-content\") pod \"redhat-operators-2nkk7\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.137872 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-utilities\") pod \"redhat-operators-2nkk7\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.137888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkml8\" (UniqueName: \"kubernetes.io/projected/01bb6b52-37a8-45cc-9675-f951757d4934-kube-api-access-xkml8\") pod \"redhat-operators-2nkk7\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.138905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-catalog-content\") pod \"redhat-operators-2nkk7\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.139114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-utilities\") pod \"redhat-operators-2nkk7\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.155874 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkml8\" (UniqueName: \"kubernetes.io/projected/01bb6b52-37a8-45cc-9675-f951757d4934-kube-api-access-xkml8\") pod \"redhat-operators-2nkk7\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.220559 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:43 crc kubenswrapper[4858]: 
I0218 00:36:43.220602 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.226505 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.314345 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.375199 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvmxz"] Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.375412 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.375454 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.381979 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:43 crc kubenswrapper[4858]: W0218 00:36:43.390387 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41a8b51a_55c4_476a_a895_6913c143f33a.slice/crio-cf74b8ec99d09618c69773582cd79c7a55aba9e572b20af021a5d67a627ec9bf WatchSource:0}: Error finding container cf74b8ec99d09618c69773582cd79c7a55aba9e572b20af021a5d67a627ec9bf: Status 404 returned error can't find the container with id cf74b8ec99d09618c69773582cd79c7a55aba9e572b20af021a5d67a627ec9bf Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.425129 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:43 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:43 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:43 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.425188 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.428766 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.591985 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvmxz" event={"ID":"41a8b51a-55c4-476a-a895-6913c143f33a","Type":"ContainerStarted","Data":"cf74b8ec99d09618c69773582cd79c7a55aba9e572b20af021a5d67a627ec9bf"} Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.598095 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bc7mz" Feb 18 00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.607733 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6rx4q" Feb 18 
00:36:43 crc kubenswrapper[4858]: I0218 00:36:43.720607 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nkk7"] Feb 18 00:36:43 crc kubenswrapper[4858]: W0218 00:36:43.740115 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01bb6b52_37a8_45cc_9675_f951757d4934.slice/crio-2ff9787fa92db899d0898b79939f6baff6831cfecb839097e19acb14be618dfc WatchSource:0}: Error finding container 2ff9787fa92db899d0898b79939f6baff6831cfecb839097e19acb14be618dfc: Status 404 returned error can't find the container with id 2ff9787fa92db899d0898b79939f6baff6831cfecb839097e19acb14be618dfc Feb 18 00:36:44 crc kubenswrapper[4858]: I0218 00:36:44.425830 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:44 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:44 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:44 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:44 crc kubenswrapper[4858]: I0218 00:36:44.426070 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:44 crc kubenswrapper[4858]: I0218 00:36:44.613991 4858 generic.go:334] "Generic (PLEG): container finished" podID="01bb6b52-37a8-45cc-9675-f951757d4934" containerID="a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157" exitCode=0 Feb 18 00:36:44 crc kubenswrapper[4858]: I0218 00:36:44.614122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nkk7" event={"ID":"01bb6b52-37a8-45cc-9675-f951757d4934","Type":"ContainerDied","Data":"a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157"} Feb 18 00:36:44 crc kubenswrapper[4858]: I0218 00:36:44.614172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nkk7" event={"ID":"01bb6b52-37a8-45cc-9675-f951757d4934","Type":"ContainerStarted","Data":"2ff9787fa92db899d0898b79939f6baff6831cfecb839097e19acb14be618dfc"} Feb 18 00:36:44 crc kubenswrapper[4858]: I0218 00:36:44.619381 4858 generic.go:334] "Generic (PLEG): container finished" podID="41a8b51a-55c4-476a-a895-6913c143f33a" containerID="c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0" exitCode=0 Feb 18 00:36:44 crc kubenswrapper[4858]: I0218 00:36:44.620677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvmxz" event={"ID":"41a8b51a-55c4-476a-a895-6913c143f33a","Type":"ContainerDied","Data":"c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0"} Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.425675 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:45 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:45 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:45 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:45 crc 
kubenswrapper[4858]: I0218 00:36:45.425737 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.450076 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.451568 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.455344 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.457313 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.457682 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.487285 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc703f2-771f-4a86-a61e-e30c32192d53-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1bc703f2-771f-4a86-a61e-e30c32192d53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.487330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc703f2-771f-4a86-a61e-e30c32192d53-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1bc703f2-771f-4a86-a61e-e30c32192d53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.589018 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc703f2-771f-4a86-a61e-e30c32192d53-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1bc703f2-771f-4a86-a61e-e30c32192d53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.589068 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc703f2-771f-4a86-a61e-e30c32192d53-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1bc703f2-771f-4a86-a61e-e30c32192d53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.589447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc703f2-771f-4a86-a61e-e30c32192d53-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1bc703f2-771f-4a86-a61e-e30c32192d53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.607224 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc703f2-771f-4a86-a61e-e30c32192d53-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1bc703f2-771f-4a86-a61e-e30c32192d53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 
00:36:45 crc kubenswrapper[4858]: I0218 00:36:45.769588 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:36:46 crc kubenswrapper[4858]: I0218 00:36:46.131103 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jnlh5" Feb 18 00:36:46 crc kubenswrapper[4858]: I0218 00:36:46.424757 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:46 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:46 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:46 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:46 crc kubenswrapper[4858]: I0218 00:36:46.424804 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:47 crc kubenswrapper[4858]: I0218 00:36:47.431155 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:47 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:47 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:47 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:47 crc kubenswrapper[4858]: I0218 00:36:47.431203 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.424007 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:48 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:48 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:48 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.424299 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.617767 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.687749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2468fb90-c123-4c1a-8483-5af234b09c07","Type":"ContainerDied","Data":"cf131fa43f3ef82e294bade41b59c606984ff1484bb5133ae1fecdb181928f15"} Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.687794 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf131fa43f3ef82e294bade41b59c606984ff1484bb5133ae1fecdb181928f15" Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.687880 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.730941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468fb90-c123-4c1a-8483-5af234b09c07-kube-api-access\") pod \"2468fb90-c123-4c1a-8483-5af234b09c07\" (UID: \"2468fb90-c123-4c1a-8483-5af234b09c07\") " Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.730988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2468fb90-c123-4c1a-8483-5af234b09c07-kubelet-dir\") pod \"2468fb90-c123-4c1a-8483-5af234b09c07\" (UID: \"2468fb90-c123-4c1a-8483-5af234b09c07\") " Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.731251 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2468fb90-c123-4c1a-8483-5af234b09c07-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2468fb90-c123-4c1a-8483-5af234b09c07" (UID: "2468fb90-c123-4c1a-8483-5af234b09c07"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.731868 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2468fb90-c123-4c1a-8483-5af234b09c07-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.736788 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2468fb90-c123-4c1a-8483-5af234b09c07-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2468fb90-c123-4c1a-8483-5af234b09c07" (UID: "2468fb90-c123-4c1a-8483-5af234b09c07"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:36:48 crc kubenswrapper[4858]: I0218 00:36:48.833824 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468fb90-c123-4c1a-8483-5af234b09c07-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:36:49 crc kubenswrapper[4858]: I0218 00:36:49.426911 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:49 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:49 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:49 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:49 crc kubenswrapper[4858]: I0218 00:36:49.427265 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:50 crc kubenswrapper[4858]: I0218 00:36:50.425094 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:50 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:50 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:50 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:50 crc kubenswrapper[4858]: I0218 00:36:50.426703 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:51 crc kubenswrapper[4858]: I0218 00:36:51.424558 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:36:51 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:51 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:51 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:51 crc kubenswrapper[4858]: I0218 00:36:51.424621 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:51 crc kubenswrapper[4858]: E0218 00:36:51.804108 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: Get \"https://access.redhat.com/webassets/docker/content/sigstore/redhat/certified-operator-index@sha256=39f3bdcc7b4d074d96a155496a08c4d7c31edef8655e36bff25553cd10753ba2/signature-2\": net/http: TLS handshake timeout" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 00:36:51 crc kubenswrapper[4858]: E0218 00:36:51.804278 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpzbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cdx8z_openshift-marketplace(10004d92-1526-4fef-a0c1-dbd5077a46a0): ErrImagePull: copying system image from manifest list: reading signatures: Get \"https://access.redhat.com/webassets/docker/content/sigstore/redhat/certified-operator-index@sha256=39f3bdcc7b4d074d96a155496a08c4d7c31edef8655e36bff25553cd10753ba2/signature-2\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 00:36:51 crc kubenswrapper[4858]: E0218 00:36:51.805913 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: Get \\\"https://access.redhat.com/webassets/docker/content/sigstore/redhat/certified-operator-index@sha256=39f3bdcc7b4d074d96a155496a08c4d7c31edef8655e36bff25553cd10753ba2/signature-2\\\": net/http: TLS handshake timeout\"" pod="openshift-marketplace/certified-operators-cdx8z" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" Feb 18 00:36:51 crc kubenswrapper[4858]: I0218 00:36:51.834936 4858 patch_prober.go:28] interesting pod/console-f9d7485db-lpg4n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 00:36:51 crc kubenswrapper[4858]: I0218 00:36:51.834994 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lpg4n" podUID="a82bb6ce-4801-417a-a4e2-93d1667999ee" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 00:36:52 crc kubenswrapper[4858]: I0218 00:36:52.425618 4858 patch_prober.go:28] interesting pod/router-default-5444994796-mxwrb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 
00:36:52 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 18 00:36:52 crc kubenswrapper[4858]: [+]process-running ok Feb 18 00:36:52 crc kubenswrapper[4858]: healthz check failed Feb 18 00:36:52 crc kubenswrapper[4858]: I0218 00:36:52.425686 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mxwrb" podUID="afbe6075-e81f-464a-bfb5-7e97510ee945" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:36:52 crc kubenswrapper[4858]: E0218 00:36:52.846699 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cdx8z" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" Feb 18 00:36:52 crc kubenswrapper[4858]: E0218 00:36:52.981785 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: Get \"https://access.redhat.com/webassets/docker/content/sigstore/redhat/redhat-marketplace-index@sha256=7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71/signature-1\": net/http: TLS handshake timeout" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 00:36:52 crc kubenswrapper[4858]: E0218 00:36:52.981955 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdlgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zvz6w_openshift-marketplace(054e4a88-251f-4406-bbed-52397e7698b4): ErrImagePull: copying system image from manifest list: reading signatures: Get \"https://access.redhat.com/webassets/docker/content/sigstore/redhat/redhat-marketplace-index@sha256=7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71/signature-1\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 00:36:52 crc kubenswrapper[4858]: E0218 
00:36:52.983148 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: Get \\\"https://access.redhat.com/webassets/docker/content/sigstore/redhat/redhat-marketplace-index@sha256=7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71/signature-1\\\": net/http: TLS handshake timeout\"" pod="openshift-marketplace/redhat-marketplace-zvz6w" podUID="054e4a88-251f-4406-bbed-52397e7698b4" Feb 18 00:36:53 crc kubenswrapper[4858]: I0218 00:36:53.084083 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6djsl" Feb 18 00:36:53 crc kubenswrapper[4858]: I0218 00:36:53.428220 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:53 crc kubenswrapper[4858]: I0218 00:36:53.432786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mxwrb" Feb 18 00:36:55 crc kubenswrapper[4858]: E0218 00:36:55.231139 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zvz6w" podUID="054e4a88-251f-4406-bbed-52397e7698b4" Feb 18 00:36:55 crc kubenswrapper[4858]: I0218 00:36:55.267451 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:36:55 crc kubenswrapper[4858]: I0218 00:36:55.267573 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:36:55 crc kubenswrapper[4858]: I0218 00:36:55.863505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:55 crc kubenswrapper[4858]: I0218 00:36:55.870590 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7064635a-c927-4499-98ce-76833fb5801c-metrics-certs\") pod \"network-metrics-daemon-jbdlz\" (UID: \"7064635a-c927-4499-98ce-76833fb5801c\") " pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:56 crc kubenswrapper[4858]: I0218 00:36:56.040903 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbdlz" Feb 18 00:36:59 crc kubenswrapper[4858]: E0218 00:36:59.010554 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: Get \"https://access.redhat.com/webassets/docker/content/sigstore/redhat/redhat-operator-index@sha256=e63978cf364b7726e184d2de62795955af608bc16e8db8063ca263c001bdb839/signature-1\": net/http: TLS handshake timeout" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 00:36:59 crc kubenswrapper[4858]: E0218 00:36:59.012448 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s79p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qvmxz_openshift-marketplace(41a8b51a-55c4-476a-a895-6913c143f33a): ErrImagePull: copying system image from manifest list: reading signatures: Get \"https://access.redhat.com/webassets/docker/content/sigstore/redhat/redhat-operator-index@sha256=e63978cf364b7726e184d2de62795955af608bc16e8db8063ca263c001bdb839/signature-1\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 00:36:59 crc kubenswrapper[4858]: E0218 00:36:59.014298 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: Get \\\"https://access.redhat.com/webassets/docker/content/sigstore/redhat/redhat-operator-index@sha256=e63978cf364b7726e184d2de62795955af608bc16e8db8063ca263c001bdb839/signature-1\\\": net/http: TLS handshake timeout\"" pod="openshift-marketplace/redhat-operators-qvmxz" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" Feb 18 00:36:59 crc kubenswrapper[4858]: I0218 00:36:59.074124 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 00:36:59 crc kubenswrapper[4858]: I0218 00:36:59.980798 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:37:01 crc kubenswrapper[4858]: I0218 00:37:01.839895 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:37:01 crc kubenswrapper[4858]: I0218 00:37:01.844301 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:37:02 crc kubenswrapper[4858]: E0218 00:37:02.796054 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qvmxz" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" Feb 18 00:37:02 crc kubenswrapper[4858]: W0218 00:37:02.813002 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1bc703f2_771f_4a86_a61e_e30c32192d53.slice/crio-671459b04c7d848bdf55b186e879052df3b98a809eafd48e29e570cd7b78975d WatchSource:0}: Error finding container 671459b04c7d848bdf55b186e879052df3b98a809eafd48e29e570cd7b78975d: Status 404 returned error can't find the container with id 671459b04c7d848bdf55b186e879052df3b98a809eafd48e29e570cd7b78975d Feb 18 00:37:03 crc kubenswrapper[4858]: I0218 00:37:03.266823 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jbdlz"] Feb 18 00:37:03 crc kubenswrapper[4858]: W0218 00:37:03.283216 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7064635a_c927_4499_98ce_76833fb5801c.slice/crio-df394f5cd06934f325ef28f5fa485308ac0655a09c9ee58a5dcb93d4df6ab825 WatchSource:0}: Error finding container df394f5cd06934f325ef28f5fa485308ac0655a09c9ee58a5dcb93d4df6ab825: Status 404 returned error can't find the container with id df394f5cd06934f325ef28f5fa485308ac0655a09c9ee58a5dcb93d4df6ab825 Feb 18 00:37:03 crc kubenswrapper[4858]: I0218 00:37:03.769696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q49pf" event={"ID":"7244cf66-b72d-4f2d-a463-89e4e8e37b2c","Type":"ContainerStarted","Data":"f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77"} Feb 18 00:37:03 crc kubenswrapper[4858]: I0218 00:37:03.773052 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k77hk" event={"ID":"9999291e-e811-4c10-8720-73bfeb32c3cf","Type":"ContainerStarted","Data":"7627c0bcb4aa2d07f7f66267d813a851d83e00de8ba75e91f2ba1846fe4074f5"} Feb 18 00:37:03 crc kubenswrapper[4858]: I0218 00:37:03.775201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" event={"ID":"7064635a-c927-4499-98ce-76833fb5801c","Type":"ContainerStarted","Data":"df394f5cd06934f325ef28f5fa485308ac0655a09c9ee58a5dcb93d4df6ab825"} Feb 18 00:37:03 crc kubenswrapper[4858]: I0218 00:37:03.777050 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1bc703f2-771f-4a86-a61e-e30c32192d53","Type":"ContainerStarted","Data":"53e0a46813849cbee5a2d2e784691790d381141d566996819a99b2037c1ba06e"} Feb 18 00:37:03 crc kubenswrapper[4858]: I0218 00:37:03.777119 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"1bc703f2-771f-4a86-a61e-e30c32192d53","Type":"ContainerStarted","Data":"671459b04c7d848bdf55b186e879052df3b98a809eafd48e29e570cd7b78975d"} Feb 18 00:37:03 crc kubenswrapper[4858]: I0218 00:37:03.778739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvp9z" event={"ID":"4f2604ed-931f-4fc1-96dc-cace175d2905","Type":"ContainerStarted","Data":"4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5"} Feb 18 00:37:03 crc kubenswrapper[4858]: I0218 00:37:03.782590 4858 generic.go:334] "Generic (PLEG): container finished" podID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerID="30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a" exitCode=0 Feb 18 00:37:03 crc kubenswrapper[4858]: I0218 00:37:03.782639 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sx858" event={"ID":"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6","Type":"ContainerDied","Data":"30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a"} Feb 18 00:37:04 crc kubenswrapper[4858]: I0218 00:37:03.847655 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=18.847626357 podStartE2EDuration="18.847626357s" podCreationTimestamp="2026-02-18 00:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:37:03.847401422 +0000 UTC m=+177.153238144" watchObservedRunningTime="2026-02-18 00:37:03.847626357 +0000 UTC m=+177.153463129" Feb 18 00:37:04 crc kubenswrapper[4858]: I0218 00:37:04.790701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nkk7" event={"ID":"01bb6b52-37a8-45cc-9675-f951757d4934","Type":"ContainerStarted","Data":"3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255"} Feb 18 00:37:04 crc kubenswrapper[4858]: I0218 00:37:04.792609 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" event={"ID":"7064635a-c927-4499-98ce-76833fb5801c","Type":"ContainerStarted","Data":"52c9979bf23c108a4331853d346d686e570674acfcad9adf55a19c27d967c6bd"} Feb 18 00:37:05 crc kubenswrapper[4858]: I0218 00:37:05.799323 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jbdlz" event={"ID":"7064635a-c927-4499-98ce-76833fb5801c","Type":"ContainerStarted","Data":"a3738695e127690a98e0eaf8e21c7dfa502af29aec0da80c2d11d20d7a63b0b2"} Feb 18 00:37:05 crc kubenswrapper[4858]: I0218 00:37:05.816434 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jbdlz" podStartSLOduration=153.816413348 podStartE2EDuration="2m33.816413348s" podCreationTimestamp="2026-02-18 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:37:05.814566552 +0000 UTC m=+179.120403334" watchObservedRunningTime="2026-02-18 00:37:05.816413348 +0000 UTC m=+179.122250080" Feb 18 00:37:09 crc kubenswrapper[4858]: I0218 00:37:09.830838 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sx858" event={"ID":"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6","Type":"ContainerStarted","Data":"e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5"} Feb 18 00:37:09 crc kubenswrapper[4858]: I0218 
00:37:09.859523 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sx858" podStartSLOduration=3.246804077 podStartE2EDuration="28.859471044s" podCreationTimestamp="2026-02-18 00:36:41 +0000 UTC" firstStartedPulling="2026-02-18 00:36:42.552580615 +0000 UTC m=+155.858417347" lastFinishedPulling="2026-02-18 00:37:08.165247542 +0000 UTC m=+181.471084314" observedRunningTime="2026-02-18 00:37:09.857628879 +0000 UTC m=+183.163465651" watchObservedRunningTime="2026-02-18 00:37:09.859471044 +0000 UTC m=+183.165307806" Feb 18 00:37:11 crc kubenswrapper[4858]: I0218 00:37:11.533548 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:37:11 crc kubenswrapper[4858]: I0218 00:37:11.535694 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.852556 4858 generic.go:334] "Generic (PLEG): container finished" podID="1bc703f2-771f-4a86-a61e-e30c32192d53" containerID="53e0a46813849cbee5a2d2e784691790d381141d566996819a99b2037c1ba06e" exitCode=0 Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.852701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1bc703f2-771f-4a86-a61e-e30c32192d53","Type":"ContainerDied","Data":"53e0a46813849cbee5a2d2e784691790d381141d566996819a99b2037c1ba06e"} Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.854861 4858 generic.go:334] "Generic (PLEG): container finished" podID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerID="4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5" exitCode=0 Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.854932 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvp9z" event={"ID":"4f2604ed-931f-4fc1-96dc-cace175d2905","Type":"ContainerDied","Data":"4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5"} Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.858023 4858 generic.go:334] "Generic (PLEG): container finished" podID="01bb6b52-37a8-45cc-9675-f951757d4934" containerID="3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255" exitCode=0 Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.858112 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nkk7" event={"ID":"01bb6b52-37a8-45cc-9675-f951757d4934","Type":"ContainerDied","Data":"3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255"} Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.860170 4858 generic.go:334] "Generic (PLEG): container finished" podID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerID="f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77" exitCode=0 Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.860225 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q49pf" event={"ID":"7244cf66-b72d-4f2d-a463-89e4e8e37b2c","Type":"ContainerDied","Data":"f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77"} Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.861588 4858 generic.go:334] "Generic (PLEG): container finished" podID="5977586b-6538-4050-bfde-dde62e4d87cd" containerID="1f69f73bd6d9598c297113b1b247f931194ceace1ff463434880b6dba90caef0" exitCode=0 Feb 18 00:37:12 crc 
kubenswrapper[4858]: I0218 00:37:12.861650 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-n4bvp" event={"ID":"5977586b-6538-4050-bfde-dde62e4d87cd","Type":"ContainerDied","Data":"1f69f73bd6d9598c297113b1b247f931194ceace1ff463434880b6dba90caef0"} Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.864109 4858 generic.go:334] "Generic (PLEG): container finished" podID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerID="7627c0bcb4aa2d07f7f66267d813a851d83e00de8ba75e91f2ba1846fe4074f5" exitCode=0 Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.864177 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k77hk" event={"ID":"9999291e-e811-4c10-8720-73bfeb32c3cf","Type":"ContainerDied","Data":"7627c0bcb4aa2d07f7f66267d813a851d83e00de8ba75e91f2ba1846fe4074f5"} Feb 18 00:37:12 crc kubenswrapper[4858]: I0218 00:37:12.866353 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdx8z" event={"ID":"10004d92-1526-4fef-a0c1-dbd5077a46a0","Type":"ContainerStarted","Data":"befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03"} Feb 18 00:37:13 crc kubenswrapper[4858]: I0218 00:37:13.416053 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sx858" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerName="registry-server" probeResult="failure" output=< Feb 18 00:37:13 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 00:37:13 crc kubenswrapper[4858]: > Feb 18 00:37:13 crc kubenswrapper[4858]: I0218 00:37:13.874615 4858 generic.go:334] "Generic (PLEG): container finished" podID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerID="befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03" exitCode=0 Feb 18 00:37:13 crc kubenswrapper[4858]: I0218 00:37:13.874714 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdx8z" event={"ID":"10004d92-1526-4fef-a0c1-dbd5077a46a0","Type":"ContainerDied","Data":"befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03"} Feb 18 00:37:13 crc kubenswrapper[4858]: I0218 00:37:13.879212 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvz6w" event={"ID":"054e4a88-251f-4406-bbed-52397e7698b4","Type":"ContainerStarted","Data":"3a1c7b34be6ca6d40eaf7fb868f17a5e4568265b9dc6679a0c0eb21578fc06d9"} Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.035911 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-smdtb" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.387511 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.416843 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.511535 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc703f2-771f-4a86-a61e-e30c32192d53-kube-api-access\") pod \"1bc703f2-771f-4a86-a61e-e30c32192d53\" (UID: \"1bc703f2-771f-4a86-a61e-e30c32192d53\") " Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.511630 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc703f2-771f-4a86-a61e-e30c32192d53-kubelet-dir\") pod \"1bc703f2-771f-4a86-a61e-e30c32192d53\" (UID: \"1bc703f2-771f-4a86-a61e-e30c32192d53\") " Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.511681 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5977586b-6538-4050-bfde-dde62e4d87cd-serviceca\") pod \"5977586b-6538-4050-bfde-dde62e4d87cd\" (UID: \"5977586b-6538-4050-bfde-dde62e4d87cd\") " Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.511728 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqllt\" (UniqueName: \"kubernetes.io/projected/5977586b-6538-4050-bfde-dde62e4d87cd-kube-api-access-hqllt\") pod \"5977586b-6538-4050-bfde-dde62e4d87cd\" (UID: \"5977586b-6538-4050-bfde-dde62e4d87cd\") " Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.511771 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc703f2-771f-4a86-a61e-e30c32192d53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1bc703f2-771f-4a86-a61e-e30c32192d53" (UID: "1bc703f2-771f-4a86-a61e-e30c32192d53"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.511946 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc703f2-771f-4a86-a61e-e30c32192d53-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.512505 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5977586b-6538-4050-bfde-dde62e4d87cd-serviceca" (OuterVolumeSpecName: "serviceca") pod "5977586b-6538-4050-bfde-dde62e4d87cd" (UID: "5977586b-6538-4050-bfde-dde62e4d87cd"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.517609 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5977586b-6538-4050-bfde-dde62e4d87cd-kube-api-access-hqllt" (OuterVolumeSpecName: "kube-api-access-hqllt") pod "5977586b-6538-4050-bfde-dde62e4d87cd" (UID: "5977586b-6538-4050-bfde-dde62e4d87cd"). InnerVolumeSpecName "kube-api-access-hqllt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.517731 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc703f2-771f-4a86-a61e-e30c32192d53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1bc703f2-771f-4a86-a61e-e30c32192d53" (UID: "1bc703f2-771f-4a86-a61e-e30c32192d53"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.612878 4858 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5977586b-6538-4050-bfde-dde62e4d87cd-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.612920 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqllt\" (UniqueName: \"kubernetes.io/projected/5977586b-6538-4050-bfde-dde62e4d87cd-kube-api-access-hqllt\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.612935 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc703f2-771f-4a86-a61e-e30c32192d53-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.885117 4858 generic.go:334] "Generic (PLEG): container finished" podID="054e4a88-251f-4406-bbed-52397e7698b4" containerID="3a1c7b34be6ca6d40eaf7fb868f17a5e4568265b9dc6679a0c0eb21578fc06d9" exitCode=0 Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.885174 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvz6w" event={"ID":"054e4a88-251f-4406-bbed-52397e7698b4","Type":"ContainerDied","Data":"3a1c7b34be6ca6d40eaf7fb868f17a5e4568265b9dc6679a0c0eb21578fc06d9"} Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.886651 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1bc703f2-771f-4a86-a61e-e30c32192d53","Type":"ContainerDied","Data":"671459b04c7d848bdf55b186e879052df3b98a809eafd48e29e570cd7b78975d"} Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.886679 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.886697 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="671459b04c7d848bdf55b186e879052df3b98a809eafd48e29e570cd7b78975d" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.888567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdx8z" event={"ID":"10004d92-1526-4fef-a0c1-dbd5077a46a0","Type":"ContainerStarted","Data":"2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9"} Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.893166 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvp9z" event={"ID":"4f2604ed-931f-4fc1-96dc-cace175d2905","Type":"ContainerStarted","Data":"a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992"} Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.894817 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nkk7" event={"ID":"01bb6b52-37a8-45cc-9675-f951757d4934","Type":"ContainerStarted","Data":"c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced"} Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.896608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q49pf" event={"ID":"7244cf66-b72d-4f2d-a463-89e4e8e37b2c","Type":"ContainerStarted","Data":"87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1"} Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.898619 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k77hk" event={"ID":"9999291e-e811-4c10-8720-73bfeb32c3cf","Type":"ContainerStarted","Data":"0af248eaa4a6ef7ee65326cee55ff3592d45d03e311ea8da26d590d391b09aab"} Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.900300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-n4bvp" event={"ID":"5977586b-6538-4050-bfde-dde62e4d87cd","Type":"ContainerDied","Data":"3ef2d1fbd2355beb4b124f9f7ce775887e3da6b72135e8e274cf11a648852266"} Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.900328 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ef2d1fbd2355beb4b124f9f7ce775887e3da6b72135e8e274cf11a648852266" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.900372 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-n4bvp" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.929204 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hvp9z" podStartSLOduration=2.521152918 podStartE2EDuration="35.929185692s" podCreationTimestamp="2026-02-18 00:36:39 +0000 UTC" firstStartedPulling="2026-02-18 00:36:40.503625872 +0000 UTC m=+153.809462604" lastFinishedPulling="2026-02-18 00:37:13.911658646 +0000 UTC m=+187.217495378" observedRunningTime="2026-02-18 00:37:14.926703041 +0000 UTC m=+188.232539773" watchObservedRunningTime="2026-02-18 00:37:14.929185692 +0000 UTC m=+188.235022424" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.948710 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdx8z" podStartSLOduration=2.185789092 podStartE2EDuration="35.948693182s" podCreationTimestamp="2026-02-18 00:36:39 +0000 UTC" firstStartedPulling="2026-02-18 00:36:40.529630431 +0000 UTC m=+153.835467163" lastFinishedPulling="2026-02-18 00:37:14.292534521 +0000 UTC m=+187.598371253" observedRunningTime="2026-02-18 00:37:14.946115249 +0000 UTC m=+188.251951991" watchObservedRunningTime="2026-02-18 00:37:14.948693182 +0000 UTC m=+188.254529914" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.988567 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k77hk" podStartSLOduration=2.513502668 podStartE2EDuration="35.988552063s" podCreationTimestamp="2026-02-18 00:36:39 +0000 UTC" firstStartedPulling="2026-02-18 00:36:40.524205228 +0000 UTC m=+153.830041960" lastFinishedPulling="2026-02-18 00:37:13.999254623 +0000 UTC m=+187.305091355" observedRunningTime="2026-02-18 00:37:14.985144319 +0000 UTC m=+188.290981051" watchObservedRunningTime="2026-02-18 00:37:14.988552063 +0000 UTC m=+188.294388795" Feb 18 00:37:14 crc kubenswrapper[4858]: I0218 00:37:14.988782 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2nkk7" podStartSLOduration=7.457046603 podStartE2EDuration="32.988778858s" podCreationTimestamp="2026-02-18 00:36:42 +0000 UTC" firstStartedPulling="2026-02-18 00:36:48.567134389 +0000 UTC m=+161.872971121" lastFinishedPulling="2026-02-18 00:37:14.098866634 +0000 UTC m=+187.404703376" observedRunningTime="2026-02-18 00:37:14.965695781 +0000 UTC m=+188.271532513" watchObservedRunningTime="2026-02-18 00:37:14.988778858 +0000 UTC m=+188.294615590" Feb 18 00:37:15 crc kubenswrapper[4858]: I0218 00:37:15.032032 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q49pf" podStartSLOduration=3.590230961 podStartE2EDuration="36.032016763s" podCreationTimestamp="2026-02-18 00:36:39 +0000 UTC" firstStartedPulling="2026-02-18 00:36:41.542305557 +0000 UTC m=+154.848142289" lastFinishedPulling="2026-02-18 00:37:13.984091359 +0000 UTC m=+187.289928091" observedRunningTime="2026-02-18 00:37:15.031228594 +0000 UTC m=+188.337065336" watchObservedRunningTime="2026-02-18 00:37:15.032016763 +0000 UTC m=+188.337853495" Feb 18 00:37:15 crc kubenswrapper[4858]: I0218 00:37:15.467316 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:37:15 crc kubenswrapper[4858]: I0218 00:37:15.907320 4858 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-zvz6w" event={"ID":"054e4a88-251f-4406-bbed-52397e7698b4","Type":"ContainerStarted","Data":"b51f3f99ea0349adcf427c036b6f2d0da1fb601e98a13b7e75748137854b03d9"} Feb 18 00:37:15 crc kubenswrapper[4858]: I0218 00:37:15.929370 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zvz6w" podStartSLOduration=2.158800097 podStartE2EDuration="34.92935437s" podCreationTimestamp="2026-02-18 00:36:41 +0000 UTC" firstStartedPulling="2026-02-18 00:36:42.549854508 +0000 UTC m=+155.855691240" lastFinishedPulling="2026-02-18 00:37:15.320408781 +0000 UTC m=+188.626245513" observedRunningTime="2026-02-18 00:37:15.925717051 +0000 UTC m=+189.231553783" watchObservedRunningTime="2026-02-18 00:37:15.92935437 +0000 UTC m=+189.235191102" Feb 18 00:37:18 crc kubenswrapper[4858]: I0218 00:37:18.923147 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvmxz" event={"ID":"41a8b51a-55c4-476a-a895-6913c143f33a","Type":"ContainerStarted","Data":"580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680"} Feb 18 00:37:19 crc kubenswrapper[4858]: E0218 00:37:19.175431 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41a8b51a_55c4_476a_a895_6913c143f33a.slice/crio-580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41a8b51a_55c4_476a_a895_6913c143f33a.slice/crio-conmon-580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680.scope\": RecentStats: unable to find data in memory cache]" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.534309 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.536124 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.612787 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.725663 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.725716 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.798888 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.921743 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.922078 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.930973 4858 generic.go:334] "Generic (PLEG): container finished" podID="41a8b51a-55c4-476a-a895-6913c143f33a" 
containerID="580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680" exitCode=0 Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.931664 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvmxz" event={"ID":"41a8b51a-55c4-476a-a895-6913c143f33a","Type":"ContainerDied","Data":"580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680"} Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.973623 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:37:19 crc kubenswrapper[4858]: I0218 00:37:19.974653 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:37:20 crc kubenswrapper[4858]: I0218 00:37:20.004067 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:37:20 crc kubenswrapper[4858]: I0218 00:37:20.133630 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:37:20 crc kubenswrapper[4858]: I0218 00:37:20.133692 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:37:20 crc kubenswrapper[4858]: I0218 00:37:20.173808 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:37:20 crc kubenswrapper[4858]: I0218 00:37:20.939604 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvmxz" event={"ID":"41a8b51a-55c4-476a-a895-6913c143f33a","Type":"ContainerStarted","Data":"ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5"} Feb 18 00:37:20 crc kubenswrapper[4858]: I0218 00:37:20.967689 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qvmxz" podStartSLOduration=7.145801642 podStartE2EDuration="38.967666225s" podCreationTimestamp="2026-02-18 00:36:42 +0000 UTC" firstStartedPulling="2026-02-18 00:36:48.56636836 +0000 UTC m=+161.872205092" lastFinishedPulling="2026-02-18 00:37:20.388232933 +0000 UTC m=+193.694069675" observedRunningTime="2026-02-18 00:37:20.96380617 +0000 UTC m=+194.269642902" watchObservedRunningTime="2026-02-18 00:37:20.967666225 +0000 UTC m=+194.273502997" Feb 18 00:37:20 crc kubenswrapper[4858]: I0218 00:37:20.993863 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:37:20 crc kubenswrapper[4858]: I0218 00:37:20.994181 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:37:21 crc kubenswrapper[4858]: I0218 00:37:21.574799 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:37:21 crc kubenswrapper[4858]: I0218 00:37:21.628156 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:37:21 crc kubenswrapper[4858]: I0218 00:37:21.909929 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:37:21 crc kubenswrapper[4858]: I0218 00:37:21.910221 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:37:21 crc kubenswrapper[4858]: I0218 00:37:21.966714 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.017080 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.447188 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 00:37:22 crc kubenswrapper[4858]: E0218 00:37:22.447596 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5977586b-6538-4050-bfde-dde62e4d87cd" containerName="image-pruner" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.447614 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5977586b-6538-4050-bfde-dde62e4d87cd" containerName="image-pruner" Feb 18 00:37:22 crc kubenswrapper[4858]: E0218 00:37:22.447635 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2468fb90-c123-4c1a-8483-5af234b09c07" containerName="pruner" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.447643 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2468fb90-c123-4c1a-8483-5af234b09c07" containerName="pruner" Feb 18 00:37:22 crc kubenswrapper[4858]: E0218 00:37:22.447668 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc703f2-771f-4a86-a61e-e30c32192d53" containerName="pruner" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.447678 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc703f2-771f-4a86-a61e-e30c32192d53" containerName="pruner" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.447907 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc703f2-771f-4a86-a61e-e30c32192d53" containerName="pruner" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.447931 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2468fb90-c123-4c1a-8483-5af234b09c07" containerName="pruner" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.447945 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5977586b-6538-4050-bfde-dde62e4d87cd" containerName="image-pruner" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.448965 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.453000 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.456733 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.474542 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.516351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e178f906-7b55-43d5-afee-d37997e462ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e178f906-7b55-43d5-afee-d37997e462ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.516402 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e178f906-7b55-43d5-afee-d37997e462ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e178f906-7b55-43d5-afee-d37997e462ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.617720 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e178f906-7b55-43d5-afee-d37997e462ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e178f906-7b55-43d5-afee-d37997e462ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.617768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e178f906-7b55-43d5-afee-d37997e462ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e178f906-7b55-43d5-afee-d37997e462ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.618171 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e178f906-7b55-43d5-afee-d37997e462ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e178f906-7b55-43d5-afee-d37997e462ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.636811 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e178f906-7b55-43d5-afee-d37997e462ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e178f906-7b55-43d5-afee-d37997e462ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.787678 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.918599 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:37:22 crc kubenswrapper[4858]: I0218 00:37:22.918844 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.199224 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q49pf"] Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.199431 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q49pf" podUID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerName="registry-server" containerID="cri-o://87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1" gracePeriod=2 Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.271694 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 00:37:23 crc kubenswrapper[4858]: W0218 00:37:23.286813 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode178f906_7b55_43d5_afee_d37997e462ff.slice/crio-4024adf4bd875feba2f1f69d773afc51e9e7ca81d50f77304308f72216c05ce6 WatchSource:0}: Error finding container 4024adf4bd875feba2f1f69d773afc51e9e7ca81d50f77304308f72216c05ce6: Status 404 returned error can't find the container with id 4024adf4bd875feba2f1f69d773afc51e9e7ca81d50f77304308f72216c05ce6 Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.314822 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.314877 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.352243 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.497093 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cjd57"] Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.630506 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.732898 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snh7b\" (UniqueName: \"kubernetes.io/projected/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-kube-api-access-snh7b\") pod \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.732989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-utilities\") pod \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.733046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-catalog-content\") pod \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\" (UID: \"7244cf66-b72d-4f2d-a463-89e4e8e37b2c\") " Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.737177 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-utilities" (OuterVolumeSpecName: "utilities") pod "7244cf66-b72d-4f2d-a463-89e4e8e37b2c" (UID: "7244cf66-b72d-4f2d-a463-89e4e8e37b2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.742351 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-kube-api-access-snh7b" (OuterVolumeSpecName: "kube-api-access-snh7b") pod "7244cf66-b72d-4f2d-a463-89e4e8e37b2c" (UID: "7244cf66-b72d-4f2d-a463-89e4e8e37b2c"). InnerVolumeSpecName "kube-api-access-snh7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.798402 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7244cf66-b72d-4f2d-a463-89e4e8e37b2c" (UID: "7244cf66-b72d-4f2d-a463-89e4e8e37b2c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.834461 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snh7b\" (UniqueName: \"kubernetes.io/projected/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-kube-api-access-snh7b\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.834508 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.834519 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7244cf66-b72d-4f2d-a463-89e4e8e37b2c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.961898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e178f906-7b55-43d5-afee-d37997e462ff","Type":"ContainerStarted","Data":"1605da0d7a29e6f22b9a54ae6b8321cf6ae692191f0eb8ea2f8acd7359b7f81a"} Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.961936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e178f906-7b55-43d5-afee-d37997e462ff","Type":"ContainerStarted","Data":"4024adf4bd875feba2f1f69d773afc51e9e7ca81d50f77304308f72216c05ce6"} Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.963967 4858 generic.go:334] "Generic (PLEG): container finished" podID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerID="87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1" exitCode=0 Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.964390 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q49pf" event={"ID":"7244cf66-b72d-4f2d-a463-89e4e8e37b2c","Type":"ContainerDied","Data":"87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1"} Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.964462 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q49pf" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.964449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q49pf" event={"ID":"7244cf66-b72d-4f2d-a463-89e4e8e37b2c","Type":"ContainerDied","Data":"8590f7f6632618e91c84bdf67dcc20921595dea1f1b2c759f0d302d597c945da"} Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.964599 4858 scope.go:117] "RemoveContainer" containerID="87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.972487 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qvmxz" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" containerName="registry-server" probeResult="failure" output=< Feb 18 00:37:23 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 00:37:23 crc kubenswrapper[4858]: > Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.978051 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.978036739 podStartE2EDuration="1.978036739s" podCreationTimestamp="2026-02-18 00:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:37:23.977429609 +0000 UTC m=+197.283266341" watchObservedRunningTime="2026-02-18 00:37:23.978036739 +0000 UTC m=+197.283873461" Feb 18 00:37:23 crc kubenswrapper[4858]: I0218 00:37:23.979898 4858 scope.go:117] "RemoveContainer" containerID="f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77" Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.010236 4858 scope.go:117] "RemoveContainer" containerID="1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f" Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.013094 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q49pf"] Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.014467 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.016319 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q49pf"] Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.030708 4858 scope.go:117] "RemoveContainer" containerID="87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1" Feb 18 00:37:24 crc kubenswrapper[4858]: E0218 00:37:24.031269 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1\": container with ID starting with 87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1 not found: ID does not exist" containerID="87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1" Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.031302 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1"} err="failed to get container status \"87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1\": rpc error: code = NotFound desc = could not find container 
\"87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1\": container with ID starting with 87da1da65ba49d88c6be61886c7d6f862c92203a3a4248ba93274064199057d1 not found: ID does not exist" Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.031344 4858 scope.go:117] "RemoveContainer" containerID="f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77" Feb 18 00:37:24 crc kubenswrapper[4858]: E0218 00:37:24.031931 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77\": container with ID starting with f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77 not found: ID does not exist" containerID="f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77" Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.032012 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77"} err="failed to get container status \"f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77\": rpc error: code = NotFound desc = could not find container \"f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77\": container with ID starting with f66a10a921f8e9e1b944388164dd5d62c7b3eb0a237cd49ee257b66c7890bc77 not found: ID does not exist" Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.032063 4858 scope.go:117] "RemoveContainer" containerID="1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f" Feb 18 00:37:24 crc kubenswrapper[4858]: E0218 00:37:24.032460 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f\": container with ID starting with 1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f not found: ID does not exist" containerID="1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f" Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.032488 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f"} err="failed to get container status \"1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f\": rpc error: code = NotFound desc = could not find container \"1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f\": container with ID starting with 1aa141a313f29a89a1b0860fd041b4f62dc252de8a84434b6a67663374ba474f not found: ID does not exist" Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.969211 4858 generic.go:334] "Generic (PLEG): container finished" podID="e178f906-7b55-43d5-afee-d37997e462ff" containerID="1605da0d7a29e6f22b9a54ae6b8321cf6ae692191f0eb8ea2f8acd7359b7f81a" exitCode=0 Feb 18 00:37:24 crc kubenswrapper[4858]: I0218 00:37:24.969535 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e178f906-7b55-43d5-afee-d37997e462ff","Type":"ContainerDied","Data":"1605da0d7a29e6f22b9a54ae6b8321cf6ae692191f0eb8ea2f8acd7359b7f81a"} Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.001836 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k77hk"] Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.002057 4858 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-k77hk" podUID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerName="registry-server" containerID="cri-o://0af248eaa4a6ef7ee65326cee55ff3592d45d03e311ea8da26d590d391b09aab" gracePeriod=2 Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.265501 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.265575 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.425788 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" path="/var/lib/kubelet/pods/7244cf66-b72d-4f2d-a463-89e4e8e37b2c/volumes" Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.603355 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvz6w"] Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.603599 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zvz6w" podUID="054e4a88-251f-4406-bbed-52397e7698b4" containerName="registry-server" containerID="cri-o://b51f3f99ea0349adcf427c036b6f2d0da1fb601e98a13b7e75748137854b03d9" gracePeriod=2 Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.977520 4858 generic.go:334] "Generic (PLEG): container finished" podID="054e4a88-251f-4406-bbed-52397e7698b4" containerID="b51f3f99ea0349adcf427c036b6f2d0da1fb601e98a13b7e75748137854b03d9" exitCode=0 Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.977522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvz6w" event={"ID":"054e4a88-251f-4406-bbed-52397e7698b4","Type":"ContainerDied","Data":"b51f3f99ea0349adcf427c036b6f2d0da1fb601e98a13b7e75748137854b03d9"} Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.980005 4858 generic.go:334] "Generic (PLEG): container finished" podID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerID="0af248eaa4a6ef7ee65326cee55ff3592d45d03e311ea8da26d590d391b09aab" exitCode=0 Feb 18 00:37:25 crc kubenswrapper[4858]: I0218 00:37:25.980034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k77hk" event={"ID":"9999291e-e811-4c10-8720-73bfeb32c3cf","Type":"ContainerDied","Data":"0af248eaa4a6ef7ee65326cee55ff3592d45d03e311ea8da26d590d391b09aab"} Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.048629 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.088186 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.164059 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-utilities\") pod \"9999291e-e811-4c10-8720-73bfeb32c3cf\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.164099 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr9m4\" (UniqueName: \"kubernetes.io/projected/9999291e-e811-4c10-8720-73bfeb32c3cf-kube-api-access-nr9m4\") pod \"9999291e-e811-4c10-8720-73bfeb32c3cf\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.164138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-catalog-content\") pod \"054e4a88-251f-4406-bbed-52397e7698b4\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.164220 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-catalog-content\") pod \"9999291e-e811-4c10-8720-73bfeb32c3cf\" (UID: \"9999291e-e811-4c10-8720-73bfeb32c3cf\") " Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.164260 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdlgw\" (UniqueName: \"kubernetes.io/projected/054e4a88-251f-4406-bbed-52397e7698b4-kube-api-access-xdlgw\") pod \"054e4a88-251f-4406-bbed-52397e7698b4\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.164332 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-utilities\") pod \"054e4a88-251f-4406-bbed-52397e7698b4\" (UID: \"054e4a88-251f-4406-bbed-52397e7698b4\") " Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.165540 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-utilities" (OuterVolumeSpecName: "utilities") pod "054e4a88-251f-4406-bbed-52397e7698b4" (UID: "054e4a88-251f-4406-bbed-52397e7698b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.165763 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-utilities" (OuterVolumeSpecName: "utilities") pod "9999291e-e811-4c10-8720-73bfeb32c3cf" (UID: "9999291e-e811-4c10-8720-73bfeb32c3cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.169685 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054e4a88-251f-4406-bbed-52397e7698b4-kube-api-access-xdlgw" (OuterVolumeSpecName: "kube-api-access-xdlgw") pod "054e4a88-251f-4406-bbed-52397e7698b4" (UID: "054e4a88-251f-4406-bbed-52397e7698b4"). InnerVolumeSpecName "kube-api-access-xdlgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.172659 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9999291e-e811-4c10-8720-73bfeb32c3cf-kube-api-access-nr9m4" (OuterVolumeSpecName: "kube-api-access-nr9m4") pod "9999291e-e811-4c10-8720-73bfeb32c3cf" (UID: "9999291e-e811-4c10-8720-73bfeb32c3cf"). InnerVolumeSpecName "kube-api-access-nr9m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.186098 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "054e4a88-251f-4406-bbed-52397e7698b4" (UID: "054e4a88-251f-4406-bbed-52397e7698b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.218465 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9999291e-e811-4c10-8720-73bfeb32c3cf" (UID: "9999291e-e811-4c10-8720-73bfeb32c3cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.229322 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265258 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e178f906-7b55-43d5-afee-d37997e462ff-kube-api-access\") pod \"e178f906-7b55-43d5-afee-d37997e462ff\" (UID: \"e178f906-7b55-43d5-afee-d37997e462ff\") " Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265319 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e178f906-7b55-43d5-afee-d37997e462ff-kubelet-dir\") pod \"e178f906-7b55-43d5-afee-d37997e462ff\" (UID: \"e178f906-7b55-43d5-afee-d37997e462ff\") " Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265404 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e178f906-7b55-43d5-afee-d37997e462ff-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e178f906-7b55-43d5-afee-d37997e462ff" (UID: "e178f906-7b55-43d5-afee-d37997e462ff"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265608 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265625 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265635 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr9m4\" (UniqueName: \"kubernetes.io/projected/9999291e-e811-4c10-8720-73bfeb32c3cf-kube-api-access-nr9m4\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265645 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/054e4a88-251f-4406-bbed-52397e7698b4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265653 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9999291e-e811-4c10-8720-73bfeb32c3cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265662 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdlgw\" (UniqueName: \"kubernetes.io/projected/054e4a88-251f-4406-bbed-52397e7698b4-kube-api-access-xdlgw\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.265670 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e178f906-7b55-43d5-afee-d37997e462ff-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.270631 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e178f906-7b55-43d5-afee-d37997e462ff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e178f906-7b55-43d5-afee-d37997e462ff" (UID: "e178f906-7b55-43d5-afee-d37997e462ff"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.366753 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e178f906-7b55-43d5-afee-d37997e462ff-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.986722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e178f906-7b55-43d5-afee-d37997e462ff","Type":"ContainerDied","Data":"4024adf4bd875feba2f1f69d773afc51e9e7ca81d50f77304308f72216c05ce6"} Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.986759 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4024adf4bd875feba2f1f69d773afc51e9e7ca81d50f77304308f72216c05ce6" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.986806 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.989527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k77hk" event={"ID":"9999291e-e811-4c10-8720-73bfeb32c3cf","Type":"ContainerDied","Data":"3ea13f8d029672204f4ee11d5e094823b3a360b896a64186c4ba762408d8c48e"} Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.989576 4858 scope.go:117] "RemoveContainer" containerID="0af248eaa4a6ef7ee65326cee55ff3592d45d03e311ea8da26d590d391b09aab" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.989639 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k77hk" Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.991838 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvz6w" event={"ID":"054e4a88-251f-4406-bbed-52397e7698b4","Type":"ContainerDied","Data":"89047c9ffcbf1663cd19fdf471497fccf5fa245803dd78a6f0c89b66807284d8"} Feb 18 00:37:26 crc kubenswrapper[4858]: I0218 00:37:26.991920 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvz6w" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.007144 4858 scope.go:117] "RemoveContainer" containerID="7627c0bcb4aa2d07f7f66267d813a851d83e00de8ba75e91f2ba1846fe4074f5" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.020711 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k77hk"] Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.025803 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k77hk"] Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.031581 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvz6w"] Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.034517 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvz6w"] Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.036012 4858 scope.go:117] "RemoveContainer" containerID="e055ca5f64ecdd9f0dee82ffeab2919e8cacabb58b6f8465ba15eb66d8a0fd97" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.043683 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.043917 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerName="extract-content" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.043932 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerName="extract-content" Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.043947 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e178f906-7b55-43d5-afee-d37997e462ff" containerName="pruner" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.043956 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e178f906-7b55-43d5-afee-d37997e462ff" containerName="pruner" Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.043970 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054e4a88-251f-4406-bbed-52397e7698b4" containerName="extract-utilities" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.043979 
4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="054e4a88-251f-4406-bbed-52397e7698b4" containerName="extract-utilities" Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.043990 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerName="extract-utilities" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.043999 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerName="extract-utilities" Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.044010 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054e4a88-251f-4406-bbed-52397e7698b4" containerName="registry-server" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044021 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="054e4a88-251f-4406-bbed-52397e7698b4" containerName="registry-server" Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.044034 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerName="registry-server" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044041 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerName="registry-server" Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.044055 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerName="registry-server" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044063 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerName="registry-server" Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.044076 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerName="extract-utilities" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044084 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerName="extract-utilities" Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.044096 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054e4a88-251f-4406-bbed-52397e7698b4" containerName="extract-content" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044104 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="054e4a88-251f-4406-bbed-52397e7698b4" containerName="extract-content" Feb 18 00:37:27 crc kubenswrapper[4858]: E0218 00:37:27.044114 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerName="extract-content" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044123 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerName="extract-content" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044236 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="054e4a88-251f-4406-bbed-52397e7698b4" containerName="registry-server" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044249 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e178f906-7b55-43d5-afee-d37997e462ff" containerName="pruner" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044260 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9999291e-e811-4c10-8720-73bfeb32c3cf" containerName="registry-server" Feb 18 00:37:27 crc 
kubenswrapper[4858]: I0218 00:37:27.044277 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7244cf66-b72d-4f2d-a463-89e4e8e37b2c" containerName="registry-server" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.044895 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.048620 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.048955 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.055048 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.056579 4858 scope.go:117] "RemoveContainer" containerID="b51f3f99ea0349adcf427c036b6f2d0da1fb601e98a13b7e75748137854b03d9" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.074395 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-var-lock\") pod \"installer-9-crc\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.074470 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kube-api-access\") pod \"installer-9-crc\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.075068 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.075528 4858 scope.go:117] "RemoveContainer" containerID="3a1c7b34be6ca6d40eaf7fb868f17a5e4568265b9dc6679a0c0eb21578fc06d9" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.092207 4858 scope.go:117] "RemoveContainer" containerID="b02eebf48fb9cee392d0c043cea1626c0b95bf592291adb262ffb28b56dda150" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.176571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kube-api-access\") pod \"installer-9-crc\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.176910 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.177093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.177217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-var-lock\") pod \"installer-9-crc\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.177238 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-var-lock\") pod \"installer-9-crc\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.193875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kube-api-access\") pod \"installer-9-crc\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.366553 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.403038 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nkk7"] Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.403685 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2nkk7" podUID="01bb6b52-37a8-45cc-9675-f951757d4934" containerName="registry-server" containerID="cri-o://c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced" gracePeriod=2 Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.437582 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="054e4a88-251f-4406-bbed-52397e7698b4" path="/var/lib/kubelet/pods/054e4a88-251f-4406-bbed-52397e7698b4/volumes" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.442359 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9999291e-e811-4c10-8720-73bfeb32c3cf" path="/var/lib/kubelet/pods/9999291e-e811-4c10-8720-73bfeb32c3cf/volumes" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.772407 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 00:37:27 crc kubenswrapper[4858]: W0218 00:37:27.777033 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcd2955ba_18ae_4daf_bf81_94f5b94b8243.slice/crio-21e14a98eb875ddacef35b75419deb5d5c8fdc1fde4f1a3afcc1007934ecff32 WatchSource:0}: Error finding container 21e14a98eb875ddacef35b75419deb5d5c8fdc1fde4f1a3afcc1007934ecff32: Status 404 returned error can't find the container with id 21e14a98eb875ddacef35b75419deb5d5c8fdc1fde4f1a3afcc1007934ecff32 Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.810821 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.886572 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-catalog-content\") pod \"01bb6b52-37a8-45cc-9675-f951757d4934\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.886685 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-utilities\") pod \"01bb6b52-37a8-45cc-9675-f951757d4934\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.886711 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkml8\" (UniqueName: \"kubernetes.io/projected/01bb6b52-37a8-45cc-9675-f951757d4934-kube-api-access-xkml8\") pod \"01bb6b52-37a8-45cc-9675-f951757d4934\" (UID: \"01bb6b52-37a8-45cc-9675-f951757d4934\") " Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.888323 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-utilities" (OuterVolumeSpecName: "utilities") pod "01bb6b52-37a8-45cc-9675-f951757d4934" (UID: "01bb6b52-37a8-45cc-9675-f951757d4934"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.894629 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01bb6b52-37a8-45cc-9675-f951757d4934-kube-api-access-xkml8" (OuterVolumeSpecName: "kube-api-access-xkml8") pod "01bb6b52-37a8-45cc-9675-f951757d4934" (UID: "01bb6b52-37a8-45cc-9675-f951757d4934"). InnerVolumeSpecName "kube-api-access-xkml8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.988616 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.988644 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkml8\" (UniqueName: \"kubernetes.io/projected/01bb6b52-37a8-45cc-9675-f951757d4934-kube-api-access-xkml8\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:27 crc kubenswrapper[4858]: I0218 00:37:27.998808 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cd2955ba-18ae-4daf-bf81-94f5b94b8243","Type":"ContainerStarted","Data":"21e14a98eb875ddacef35b75419deb5d5c8fdc1fde4f1a3afcc1007934ecff32"} Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.000886 4858 generic.go:334] "Generic (PLEG): container finished" podID="01bb6b52-37a8-45cc-9675-f951757d4934" containerID="c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced" exitCode=0 Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.000959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nkk7" event={"ID":"01bb6b52-37a8-45cc-9675-f951757d4934","Type":"ContainerDied","Data":"c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced"} Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.000993 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nkk7" event={"ID":"01bb6b52-37a8-45cc-9675-f951757d4934","Type":"ContainerDied","Data":"2ff9787fa92db899d0898b79939f6baff6831cfecb839097e19acb14be618dfc"} Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.001023 4858 scope.go:117] "RemoveContainer" containerID="c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.001150 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2nkk7" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.004055 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01bb6b52-37a8-45cc-9675-f951757d4934" (UID: "01bb6b52-37a8-45cc-9675-f951757d4934"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.023158 4858 scope.go:117] "RemoveContainer" containerID="3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.045379 4858 scope.go:117] "RemoveContainer" containerID="a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.059782 4858 scope.go:117] "RemoveContainer" containerID="c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced" Feb 18 00:37:28 crc kubenswrapper[4858]: E0218 00:37:28.060220 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced\": container with ID starting with c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced not found: ID does not exist" containerID="c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.060263 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced"} err="failed to get container status \"c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced\": rpc error: code = NotFound desc = could not find container \"c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced\": container with ID starting with c108927e9745ab7e29525befe2e3fad3bf738b4636015e55ded101a2edd08ced not found: ID does not exist" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.060290 4858 scope.go:117] "RemoveContainer" containerID="3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255" Feb 18 00:37:28 crc kubenswrapper[4858]: E0218 00:37:28.060549 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255\": container with ID starting with 3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255 not found: ID does not exist" containerID="3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.060575 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255"} err="failed to get container status \"3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255\": rpc error: code = NotFound desc = could not find container \"3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255\": container with ID starting with 3e6fd171f0233284c62132eebc0547ad95068557533846ee79f914c16e8ef255 not found: ID does not exist" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.060595 4858 scope.go:117] "RemoveContainer" containerID="a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157" Feb 18 00:37:28 crc kubenswrapper[4858]: E0218 00:37:28.060901 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157\": container with ID starting with a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157 not found: ID does not exist" containerID="a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157" Feb 18 00:37:28 crc 
kubenswrapper[4858]: I0218 00:37:28.060921 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157"} err="failed to get container status \"a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157\": rpc error: code = NotFound desc = could not find container \"a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157\": container with ID starting with a976f1bc9161968ec256dbe3602076d5131c630bfc40bba6353f3d3c9d7b9157 not found: ID does not exist" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.089949 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01bb6b52-37a8-45cc-9675-f951757d4934-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.325639 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nkk7"] Feb 18 00:37:28 crc kubenswrapper[4858]: I0218 00:37:28.328951 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2nkk7"] Feb 18 00:37:29 crc kubenswrapper[4858]: I0218 00:37:29.012039 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cd2955ba-18ae-4daf-bf81-94f5b94b8243","Type":"ContainerStarted","Data":"914bf089d05f2fe753f8b4856360071e3027631a1be613ece78f69ebf8777a94"} Feb 18 00:37:29 crc kubenswrapper[4858]: I0218 00:37:29.028267 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.028247226 podStartE2EDuration="2.028247226s" podCreationTimestamp="2026-02-18 00:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:37:29.026802051 +0000 UTC m=+202.332638803" watchObservedRunningTime="2026-02-18 00:37:29.028247226 +0000 UTC m=+202.334083958" Feb 18 00:37:29 crc kubenswrapper[4858]: I0218 00:37:29.426683 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01bb6b52-37a8-45cc-9675-f951757d4934" path="/var/lib/kubelet/pods/01bb6b52-37a8-45cc-9675-f951757d4934/volumes" Feb 18 00:37:32 crc kubenswrapper[4858]: I0218 00:37:32.954225 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:37:32 crc kubenswrapper[4858]: I0218 00:37:32.998148 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:37:48 crc kubenswrapper[4858]: I0218 00:37:48.551800 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" podUID="92e95ff1-a825-4d17-825f-f4765353a5f2" containerName="oauth-openshift" containerID="cri-o://ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3" gracePeriod=15 Feb 18 00:37:48 crc kubenswrapper[4858]: I0218 00:37:48.989950 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080530 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-855d4664c5-ln4ml"] Feb 18 00:37:49 crc kubenswrapper[4858]: E0218 00:37:49.080721 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bb6b52-37a8-45cc-9675-f951757d4934" containerName="extract-utilities" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080732 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bb6b52-37a8-45cc-9675-f951757d4934" containerName="extract-utilities" Feb 18 00:37:49 crc kubenswrapper[4858]: E0218 00:37:49.080743 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e95ff1-a825-4d17-825f-f4765353a5f2" containerName="oauth-openshift" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080750 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e95ff1-a825-4d17-825f-f4765353a5f2" containerName="oauth-openshift" Feb 18 00:37:49 crc kubenswrapper[4858]: E0218 00:37:49.080761 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bb6b52-37a8-45cc-9675-f951757d4934" containerName="extract-content" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080767 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bb6b52-37a8-45cc-9675-f951757d4934" containerName="extract-content" Feb 18 00:37:49 crc kubenswrapper[4858]: E0218 00:37:49.080776 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bb6b52-37a8-45cc-9675-f951757d4934" containerName="registry-server" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080781 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bb6b52-37a8-45cc-9675-f951757d4934" containerName="registry-server" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080816 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-service-ca\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080883 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bb6b52-37a8-45cc-9675-f951757d4934" containerName="registry-server" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080897 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e95ff1-a825-4d17-825f-f4765353a5f2" containerName="oauth-openshift" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080918 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-cliconfig\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080957 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-serving-cert\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.080991 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-error\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-login\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081083 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-dir\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-provider-selection\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081163 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-policies\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081195 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-idp-0-file-data\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081227 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081262 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njhvx\" (UniqueName: \"kubernetes.io/projected/92e95ff1-a825-4d17-825f-f4765353a5f2-kube-api-access-njhvx\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081305 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-session\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081336 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-trusted-ca-bundle\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081378 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-router-certs\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.081446 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-ocp-branding-template\") pod \"92e95ff1-a825-4d17-825f-f4765353a5f2\" (UID: \"92e95ff1-a825-4d17-825f-f4765353a5f2\") " Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.083935 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.083977 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.084422 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.084969 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.087921 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.093951 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e95ff1-a825-4d17-825f-f4765353a5f2-kube-api-access-njhvx" (OuterVolumeSpecName: "kube-api-access-njhvx") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "kube-api-access-njhvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.094336 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.094526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.095008 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.097664 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.104113 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-855d4664c5-ln4ml"] Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.115801 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.119275 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.119382 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.119759 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "92e95ff1-a825-4d17-825f-f4765353a5f2" (UID: "92e95ff1-a825-4d17-825f-f4765353a5f2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.170860 4858 generic.go:334] "Generic (PLEG): container finished" podID="92e95ff1-a825-4d17-825f-f4765353a5f2" containerID="ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3" exitCode=0 Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.170908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" event={"ID":"92e95ff1-a825-4d17-825f-f4765353a5f2","Type":"ContainerDied","Data":"ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3"} Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.170935 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" event={"ID":"92e95ff1-a825-4d17-825f-f4765353a5f2","Type":"ContainerDied","Data":"e93c5ad4705f0bc55db0404ec4ae73703bfedc19dfa10b9c8b24d300db044651"} Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.170954 4858 scope.go:117] "RemoveContainer" containerID="ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.171063 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cjd57" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182463 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-serving-cert\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182749 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182780 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182806 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-service-ca\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182851 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-template-login\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182882 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-session\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-audit-policies\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182943 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-cliconfig\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182971 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7dx\" (UniqueName: \"kubernetes.io/projected/c3133671-c0a2-4778-a0de-484e7fdec666-kube-api-access-wm7dx\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.182996 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3133671-c0a2-4778-a0de-484e7fdec666-audit-dir\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183019 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-router-certs\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183046 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183073 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-template-error\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183128 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183145 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-service-ca\") on node 
\"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183158 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183197 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183465 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183479 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183508 4858 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183523 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183534 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183546 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183658 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njhvx\" (UniqueName: \"kubernetes.io/projected/92e95ff1-a825-4d17-825f-f4765353a5f2-kube-api-access-njhvx\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183679 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183794 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.183824 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e95ff1-a825-4d17-825f-f4765353a5f2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" 
Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.203239 4858 scope.go:117] "RemoveContainer" containerID="ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3" Feb 18 00:37:49 crc kubenswrapper[4858]: E0218 00:37:49.203803 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3\": container with ID starting with ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3 not found: ID does not exist" containerID="ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.203854 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3"} err="failed to get container status \"ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3\": rpc error: code = NotFound desc = could not find container \"ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3\": container with ID starting with ef896e8beecc19f2288d407ae9d629548dbb2dcbed5b1d3053e8e56205c5c8d3 not found: ID does not exist" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.205176 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cjd57"] Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.207670 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cjd57"] Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.284773 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-cliconfig\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.284837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm7dx\" (UniqueName: \"kubernetes.io/projected/c3133671-c0a2-4778-a0de-484e7fdec666-kube-api-access-wm7dx\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.284863 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3133671-c0a2-4778-a0de-484e7fdec666-audit-dir\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.284887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-router-certs\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.284913 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.284940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-template-error\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.284978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-serving-cert\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.285005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.285003 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3133671-c0a2-4778-a0de-484e7fdec666-audit-dir\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.285032 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-service-ca\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.285296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.285357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.285402 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-template-login\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.286419 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-service-ca\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.286481 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.286844 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-cliconfig\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.287003 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-session\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.287079 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-audit-policies\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.288258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3133671-c0a2-4778-a0de-484e7fdec666-audit-policies\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.290160 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.290913 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.291095 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-serving-cert\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.291910 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.292464 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-template-error\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.292712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-system-session\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.293615 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.294292 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c3133671-c0a2-4778-a0de-484e7fdec666-v4-0-config-user-template-login\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.306161 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm7dx\" (UniqueName: \"kubernetes.io/projected/c3133671-c0a2-4778-a0de-484e7fdec666-kube-api-access-wm7dx\") pod \"oauth-openshift-855d4664c5-ln4ml\" (UID: \"c3133671-c0a2-4778-a0de-484e7fdec666\") " pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.430358 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e95ff1-a825-4d17-825f-f4765353a5f2" path="/var/lib/kubelet/pods/92e95ff1-a825-4d17-825f-f4765353a5f2/volumes" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.451316 4858 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:49 crc kubenswrapper[4858]: I0218 00:37:49.948882 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-855d4664c5-ln4ml"] Feb 18 00:37:50 crc kubenswrapper[4858]: I0218 00:37:50.180318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" event={"ID":"c3133671-c0a2-4778-a0de-484e7fdec666","Type":"ContainerStarted","Data":"e1631609185780b66079d77d119597a1cc9203ce28527ea389a9981fd39e2d74"} Feb 18 00:37:51 crc kubenswrapper[4858]: I0218 00:37:51.190071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" event={"ID":"c3133671-c0a2-4778-a0de-484e7fdec666","Type":"ContainerStarted","Data":"162753cd01997db8d75cb3bade43bb85a606b928fee9f726aa9c425515003993"} Feb 18 00:37:51 crc kubenswrapper[4858]: I0218 00:37:51.190734 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:51 crc kubenswrapper[4858]: I0218 00:37:51.196009 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" Feb 18 00:37:51 crc kubenswrapper[4858]: I0218 00:37:51.213168 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-855d4664c5-ln4ml" podStartSLOduration=28.213147728 podStartE2EDuration="28.213147728s" podCreationTimestamp="2026-02-18 00:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:37:51.208627607 +0000 UTC m=+224.514464349" watchObservedRunningTime="2026-02-18 00:37:51.213147728 +0000 UTC m=+224.518984460" Feb 18 00:37:55 crc kubenswrapper[4858]: I0218 00:37:55.265653 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:37:55 crc kubenswrapper[4858]: I0218 00:37:55.267869 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:37:55 crc kubenswrapper[4858]: I0218 00:37:55.268201 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:37:55 crc kubenswrapper[4858]: I0218 00:37:55.269214 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:37:55 crc kubenswrapper[4858]: I0218 00:37:55.269551 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" 
podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645" gracePeriod=600 Feb 18 00:37:56 crc kubenswrapper[4858]: I0218 00:37:56.233624 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645" exitCode=0 Feb 18 00:37:56 crc kubenswrapper[4858]: I0218 00:37:56.233795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645"} Feb 18 00:37:56 crc kubenswrapper[4858]: I0218 00:37:56.234098 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"dd89a9972872421e0ada8f4abaeaad4802dc5c9d7697434f5a0c48b333e5af6b"} Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.594796 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596195 4858 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596231 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:38:05 crc kubenswrapper[4858]: E0218 00:38:05.596414 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596431 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 00:38:05 crc kubenswrapper[4858]: E0218 00:38:05.596452 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596464 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:38:05 crc kubenswrapper[4858]: E0218 00:38:05.596481 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596520 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 00:38:05 crc kubenswrapper[4858]: E0218 00:38:05.596536 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596548 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 00:38:05 crc kubenswrapper[4858]: E0218 00:38:05.596569 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 
00:38:05.596583 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:38:05 crc kubenswrapper[4858]: E0218 00:38:05.596603 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596615 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596767 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596784 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596798 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596817 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596832 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 00:38:05 crc kubenswrapper[4858]: E0218 00:38:05.596980 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.596994 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.597161 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.599853 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.600884 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c" gracePeriod=15 Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.601409 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f" gracePeriod=15 Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.601564 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88" gracePeriod=15 Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.601660 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716" gracePeriod=15 Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.601751 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1" gracePeriod=15 Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.615975 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.616514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.616565 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.616584 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.616606 
4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.616622 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.616804 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.616888 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.616949 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.647836 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718237 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718627 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718657 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718679 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718877 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718904 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718932 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.718981 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.719006 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.719032 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: I0218 00:38:05.943973 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:05 crc kubenswrapper[4858]: W0218 00:38:05.975456 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-3347e06faedb01c2d7b492656f53b55b759def8e8b90352d5f91bd5cf7a7531c WatchSource:0}: Error finding container 3347e06faedb01c2d7b492656f53b55b759def8e8b90352d5f91bd5cf7a7531c: Status 404 returned error can't find the container with id 3347e06faedb01c2d7b492656f53b55b759def8e8b90352d5f91bd5cf7a7531c Feb 18 00:38:05 crc kubenswrapper[4858]: E0218 00:38:05.979107 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189530416bad8448 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:38:05.9776994 +0000 UTC m=+239.283536132,LastTimestamp:2026-02-18 00:38:05.9776994 +0000 UTC m=+239.283536132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.308252 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d"} Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.308320 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3347e06faedb01c2d7b492656f53b55b759def8e8b90352d5f91bd5cf7a7531c"} Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.310482 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.313554 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.315368 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.316421 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f" exitCode=0 Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.316457 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88" exitCode=0 Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.316472 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716" exitCode=0 Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.316487 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1" exitCode=2 Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.316558 4858 scope.go:117] "RemoveContainer" containerID="29168fd7d532fe3b640a912eeb286c379a2e3847dae8bb21ee613e488f40b167" Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.318912 4858 generic.go:334] "Generic (PLEG): container finished" podID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" containerID="914bf089d05f2fe753f8b4856360071e3027631a1be613ece78f69ebf8777a94" exitCode=0 Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.318962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cd2955ba-18ae-4daf-bf81-94f5b94b8243","Type":"ContainerDied","Data":"914bf089d05f2fe753f8b4856360071e3027631a1be613ece78f69ebf8777a94"} Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.319758 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 
18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.320684 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:06 crc kubenswrapper[4858]: E0218 00:38:06.355308 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:06 crc kubenswrapper[4858]: E0218 00:38:06.356792 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:06 crc kubenswrapper[4858]: E0218 00:38:06.357341 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:06 crc kubenswrapper[4858]: E0218 00:38:06.357867 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:06 crc kubenswrapper[4858]: E0218 00:38:06.358582 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:06 crc kubenswrapper[4858]: I0218 00:38:06.358751 4858 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 00:38:06 crc kubenswrapper[4858]: E0218 00:38:06.360401 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="200ms" Feb 18 00:38:06 crc kubenswrapper[4858]: E0218 00:38:06.561259 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="400ms" Feb 18 00:38:06 crc kubenswrapper[4858]: E0218 00:38:06.974789 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="800ms" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.330651 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.442795 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.443725 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.604716 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.605667 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.606019 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.745261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-var-lock\") pod \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.745345 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-var-lock" (OuterVolumeSpecName: "var-lock") pod "cd2955ba-18ae-4daf-bf81-94f5b94b8243" (UID: "cd2955ba-18ae-4daf-bf81-94f5b94b8243"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.745364 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kubelet-dir\") pod \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.745403 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kube-api-access\") pod \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\" (UID: \"cd2955ba-18ae-4daf-bf81-94f5b94b8243\") " Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.745519 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cd2955ba-18ae-4daf-bf81-94f5b94b8243" (UID: "cd2955ba-18ae-4daf-bf81-94f5b94b8243"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.745689 4858 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.745706 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.774043 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cd2955ba-18ae-4daf-bf81-94f5b94b8243" (UID: "cd2955ba-18ae-4daf-bf81-94f5b94b8243"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:07 crc kubenswrapper[4858]: E0218 00:38:07.775446 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="1.6s" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.846667 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd2955ba-18ae-4daf-bf81-94f5b94b8243-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.995311 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.995979 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.996526 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.996966 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:07 crc kubenswrapper[4858]: I0218 00:38:07.997536 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.150279 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.150420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.150473 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.150571 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.150597 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.150684 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.150913 4858 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.150945 4858 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.150962 4858 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.343452 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cd2955ba-18ae-4daf-bf81-94f5b94b8243","Type":"ContainerDied","Data":"21e14a98eb875ddacef35b75419deb5d5c8fdc1fde4f1a3afcc1007934ecff32"} Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.343744 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e14a98eb875ddacef35b75419deb5d5c8fdc1fde4f1a3afcc1007934ecff32" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.343571 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.347028 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.347711 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c" exitCode=0 Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.347766 4858 scope.go:117] "RemoveContainer" containerID="c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.347910 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.358481 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.358838 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.359067 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.361220 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.361426 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.361634 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.377106 4858 scope.go:117] "RemoveContainer" containerID="d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.391112 4858 scope.go:117] "RemoveContainer" containerID="271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.409869 4858 scope.go:117] "RemoveContainer" containerID="943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.426249 4858 scope.go:117] "RemoveContainer" containerID="c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.446134 4858 scope.go:117] "RemoveContainer" containerID="5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.463762 4858 scope.go:117] "RemoveContainer" containerID="c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f" Feb 18 00:38:08 crc 
kubenswrapper[4858]: E0218 00:38:08.464250 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\": container with ID starting with c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f not found: ID does not exist" containerID="c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.464284 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f"} err="failed to get container status \"c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\": rpc error: code = NotFound desc = could not find container \"c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f\": container with ID starting with c24949e8feba88d75ece85f628ded04871a18566921399c6fdcccfc6c00ac55f not found: ID does not exist" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.464309 4858 scope.go:117] "RemoveContainer" containerID="d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88" Feb 18 00:38:08 crc kubenswrapper[4858]: E0218 00:38:08.464779 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\": container with ID starting with d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88 not found: ID does not exist" containerID="d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.464808 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88"} err="failed to get container status \"d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\": rpc error: code = NotFound desc = could not find container \"d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88\": container with ID starting with d3dd0dd43ef1faa34d18da15c63fc849fb0681a20ebbf91ce45861603fe88b88 not found: ID does not exist" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.464826 4858 scope.go:117] "RemoveContainer" containerID="271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716" Feb 18 00:38:08 crc kubenswrapper[4858]: E0218 00:38:08.465557 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\": container with ID starting with 271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716 not found: ID does not exist" containerID="271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.465580 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716"} err="failed to get container status \"271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\": rpc error: code = NotFound desc = could not find container \"271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716\": container with ID starting with 271077fb93fc78c4fa6098f31a066c48ae3b64fd60c6aeff5c2aa530204ca716 not found: ID does not exist" Feb 18 00:38:08 crc kubenswrapper[4858]: 
I0218 00:38:08.465596 4858 scope.go:117] "RemoveContainer" containerID="943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1" Feb 18 00:38:08 crc kubenswrapper[4858]: E0218 00:38:08.466693 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\": container with ID starting with 943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1 not found: ID does not exist" containerID="943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.466717 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1"} err="failed to get container status \"943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\": rpc error: code = NotFound desc = could not find container \"943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1\": container with ID starting with 943536bb47e39832283224c348ca3c27ad0ffa70e95cd6d623a3dd33613b37d1 not found: ID does not exist" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.466735 4858 scope.go:117] "RemoveContainer" containerID="c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c" Feb 18 00:38:08 crc kubenswrapper[4858]: E0218 00:38:08.467169 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\": container with ID starting with c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c not found: ID does not exist" containerID="c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.467195 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c"} err="failed to get container status \"c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\": rpc error: code = NotFound desc = could not find container \"c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c\": container with ID starting with c127a93f6622656f4508b3ec879136e8b68fe1afb0330a6cd38c1ef5a10a0e3c not found: ID does not exist" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.467211 4858 scope.go:117] "RemoveContainer" containerID="5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a" Feb 18 00:38:08 crc kubenswrapper[4858]: E0218 00:38:08.467608 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\": container with ID starting with 5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a not found: ID does not exist" containerID="5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a" Feb 18 00:38:08 crc kubenswrapper[4858]: I0218 00:38:08.467628 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a"} err="failed to get container status \"5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\": rpc error: code = NotFound desc = could not find container \"5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a\": container 
with ID starting with 5aa550407449c10c5f491e2e161f1ea580f930c199008745b730185082ba9f0a not found: ID does not exist" Feb 18 00:38:09 crc kubenswrapper[4858]: E0218 00:38:09.376712 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="3.2s" Feb 18 00:38:09 crc kubenswrapper[4858]: E0218 00:38:09.397119 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189530416bad8448 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:38:05.9776994 +0000 UTC m=+239.283536132,LastTimestamp:2026-02-18 00:38:05.9776994 +0000 UTC m=+239.283536132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:38:09 crc kubenswrapper[4858]: I0218 00:38:09.427237 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 00:38:12 crc kubenswrapper[4858]: E0218 00:38:12.264736 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:38:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:38:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:38:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:38:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:12 crc kubenswrapper[4858]: E0218 00:38:12.265290 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:12 crc kubenswrapper[4858]: E0218 00:38:12.265586 4858 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:12 crc kubenswrapper[4858]: E0218 00:38:12.266010 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:12 crc kubenswrapper[4858]: E0218 00:38:12.266518 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:12 crc kubenswrapper[4858]: E0218 00:38:12.266547 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:38:12 crc kubenswrapper[4858]: E0218 00:38:12.578941 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="6.4s" Feb 18 00:38:17 crc kubenswrapper[4858]: I0218 00:38:17.422705 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:17 crc kubenswrapper[4858]: I0218 00:38:17.423613 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:18 crc kubenswrapper[4858]: I0218 00:38:18.405593 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 00:38:18 crc kubenswrapper[4858]: I0218 00:38:18.405908 4858 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee" exitCode=1 Feb 18 00:38:18 crc kubenswrapper[4858]: I0218 00:38:18.405943 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee"} Feb 18 00:38:18 crc kubenswrapper[4858]: I0218 00:38:18.406474 4858 scope.go:117] "RemoveContainer" containerID="b291eb2ce3843e4e9d36289c6bb96b472d1cb947ea434597f6c2e94279bac6ee" Feb 18 00:38:18 crc kubenswrapper[4858]: I0218 00:38:18.407394 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:18 crc 
kubenswrapper[4858]: I0218 00:38:18.407935 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:18 crc kubenswrapper[4858]: I0218 00:38:18.409277 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:18 crc kubenswrapper[4858]: E0218 00:38:18.979857 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="7s" Feb 18 00:38:19 crc kubenswrapper[4858]: E0218 00:38:19.398863 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189530416bad8448 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:38:05.9776994 +0000 UTC m=+239.283536132,LastTimestamp:2026-02-18 00:38:05.9776994 +0000 UTC m=+239.283536132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:38:19 crc kubenswrapper[4858]: I0218 00:38:19.417942 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 00:38:19 crc kubenswrapper[4858]: I0218 00:38:19.418036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"80e8670661b6266e5a7146dce2ab1659083f9997bd57c9c423aa43351b1c6208"} Feb 18 00:38:19 crc kubenswrapper[4858]: I0218 00:38:19.420049 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:19 crc kubenswrapper[4858]: I0218 00:38:19.420554 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:19 crc kubenswrapper[4858]: I0218 00:38:19.422391 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:19 crc kubenswrapper[4858]: I0218 00:38:19.703182 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:38:20 crc kubenswrapper[4858]: I0218 00:38:20.430833 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:20 crc kubenswrapper[4858]: I0218 00:38:20.435377 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:20 crc kubenswrapper[4858]: I0218 00:38:20.436114 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:20 crc kubenswrapper[4858]: I0218 00:38:20.453628 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:20 crc kubenswrapper[4858]: I0218 00:38:20.457711 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:20 crc kubenswrapper[4858]: I0218 00:38:20.457748 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:20 crc kubenswrapper[4858]: E0218 00:38:20.458120 4858 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:20 crc kubenswrapper[4858]: I0218 00:38:20.458832 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:21 crc kubenswrapper[4858]: I0218 00:38:21.445338 4858 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8a909825f85f6293cdb2a6c49282da48dddfc7ed8c1d09aa232501a3398875b9" exitCode=0 Feb 18 00:38:21 crc kubenswrapper[4858]: I0218 00:38:21.445462 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8a909825f85f6293cdb2a6c49282da48dddfc7ed8c1d09aa232501a3398875b9"} Feb 18 00:38:21 crc kubenswrapper[4858]: I0218 00:38:21.445886 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64a6a6c8b87412d3b8a05ffca9767618bdc7c45ce031e08cdaf56d9a53856bc4"} Feb 18 00:38:21 crc kubenswrapper[4858]: I0218 00:38:21.447022 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:21 crc kubenswrapper[4858]: I0218 00:38:21.447046 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:21 crc kubenswrapper[4858]: I0218 00:38:21.447461 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:21 crc kubenswrapper[4858]: E0218 00:38:21.447900 4858 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:21 crc kubenswrapper[4858]: I0218 00:38:21.448863 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:21 crc kubenswrapper[4858]: I0218 00:38:21.449438 4858 status_manager.go:851] "Failed to get status for pod" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Feb 18 00:38:22 crc kubenswrapper[4858]: I0218 00:38:22.456603 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2d0e37c80ac0defc8287dd2dfaeca37300efd3a2ec84e0ca83cf08765230b44c"} Feb 18 00:38:22 crc kubenswrapper[4858]: I0218 00:38:22.456852 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f2859029eb3d5f1a6f7e9ec9d9846134b2ef24bab56eb0a434d9f69929808169"} Feb 18 00:38:22 crc kubenswrapper[4858]: I0218 00:38:22.456863 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e211a91c09a3d799ef615acd8a562da345641d68ba0cd016f37b895810c7f3ac"} Feb 18 00:38:23 crc kubenswrapper[4858]: I0218 00:38:23.468079 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"01d48f2a1b290d5b5bc53ac02ec67651eede6b614c3ea1e8c4fb445ea286681b"} Feb 18 00:38:23 crc kubenswrapper[4858]: I0218 00:38:23.469011 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:23 crc kubenswrapper[4858]: I0218 00:38:23.469104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5828613b87b9c78feb9717611771794df0d967751b25bde374990133bdda5a1e"} Feb 18 00:38:23 crc kubenswrapper[4858]: I0218 00:38:23.468328 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:23 crc kubenswrapper[4858]: I0218 00:38:23.469244 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:24 crc kubenswrapper[4858]: I0218 00:38:24.174363 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:38:24 crc kubenswrapper[4858]: I0218 00:38:24.183442 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:38:25 crc kubenswrapper[4858]: I0218 00:38:25.459776 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:25 crc kubenswrapper[4858]: I0218 00:38:25.459900 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:25 crc kubenswrapper[4858]: I0218 00:38:25.468254 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:28 crc kubenswrapper[4858]: I0218 00:38:28.477421 4858 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:28 crc kubenswrapper[4858]: I0218 00:38:28.500417 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:28 crc kubenswrapper[4858]: I0218 00:38:28.500673 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:28 crc kubenswrapper[4858]: I0218 00:38:28.507538 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:28 crc kubenswrapper[4858]: I0218 00:38:28.541902 4858 status_manager.go:861] "Pod was deleted and 
then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6dab154b-ebb6-48fa-89df-7b1e0731aa61" Feb 18 00:38:29 crc kubenswrapper[4858]: I0218 00:38:29.506941 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:29 crc kubenswrapper[4858]: I0218 00:38:29.506992 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="95ba33b5-7799-44ab-8de6-451433944bb8" Feb 18 00:38:29 crc kubenswrapper[4858]: I0218 00:38:29.510283 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6dab154b-ebb6-48fa-89df-7b1e0731aa61" Feb 18 00:38:29 crc kubenswrapper[4858]: I0218 00:38:29.707048 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:38:37 crc kubenswrapper[4858]: I0218 00:38:37.967858 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 00:38:38 crc kubenswrapper[4858]: I0218 00:38:38.817902 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 00:38:39 crc kubenswrapper[4858]: I0218 00:38:39.215886 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 00:38:39 crc kubenswrapper[4858]: I0218 00:38:39.812023 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 00:38:39 crc kubenswrapper[4858]: I0218 00:38:39.991593 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 00:38:40 crc kubenswrapper[4858]: I0218 00:38:40.111784 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 00:38:40 crc kubenswrapper[4858]: I0218 00:38:40.255509 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 00:38:40 crc kubenswrapper[4858]: I0218 00:38:40.270383 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 00:38:40 crc kubenswrapper[4858]: I0218 00:38:40.272182 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 00:38:40 crc kubenswrapper[4858]: I0218 00:38:40.536404 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 00:38:40 crc kubenswrapper[4858]: I0218 00:38:40.703715 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 00:38:40 crc kubenswrapper[4858]: I0218 00:38:40.761968 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 00:38:40 crc kubenswrapper[4858]: I0218 00:38:40.914968 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 00:38:40 crc kubenswrapper[4858]: I0218 00:38:40.941231 4858 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.008435 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.020916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.134949 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.144477 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.234465 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.293615 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.299518 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.368088 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.448318 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.466977 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.540265 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.618768 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.636567 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.659764 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.673341 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.863469 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.882586 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 00:38:41 crc kubenswrapper[4858]: I0218 00:38:41.958398 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.063225 4858 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.080740 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.084370 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.121599 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.167489 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.267597 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.280294 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.306814 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.333338 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.372533 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.406949 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.526644 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.843099 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.948124 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 00:38:42 crc kubenswrapper[4858]: I0218 00:38:42.988444 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.086339 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.113705 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.113848 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.117119 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.151591 4858 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.154245 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.190605 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.214248 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.251825 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.281861 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.336166 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.355583 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.356199 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.378958 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.567735 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.589106 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.670913 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.720692 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.747151 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.759014 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.765351 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.787568 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.803398 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.851678 4858 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.852415 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.863234 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.944546 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 00:38:43 crc kubenswrapper[4858]: I0218 00:38:43.973367 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.018085 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.059341 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.161527 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.163890 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.192843 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.272674 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.326656 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.413109 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.435563 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.504857 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.522785 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.543434 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.618835 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.622659 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.623533 4858 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.631869 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.661112 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.727905 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.734327 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.741613 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.752422 4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.828347 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.850670 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.865297 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 00:38:44 crc kubenswrapper[4858]: I0218 00:38:44.873155 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.101785 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.135709 4858 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.145801 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.148582 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.188612 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.194306 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.208148 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.232257 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.257830 4858 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.293754 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.310016 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.324531 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.405516 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.409800 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.435330 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.435378 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.512746 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.544997 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.597185 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.751370 4858 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.755935 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.755911644 podStartE2EDuration="40.755911644s" podCreationTimestamp="2026-02-18 00:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:38:28.487113511 +0000 UTC m=+261.792950243" watchObservedRunningTime="2026-02-18 00:38:45.755911644 +0000 UTC m=+279.061748406" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.759172 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.759232 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.765657 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.789133 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.789108688 
podStartE2EDuration="17.789108688s" podCreationTimestamp="2026-02-18 00:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:38:45.777415411 +0000 UTC m=+279.083252153" watchObservedRunningTime="2026-02-18 00:38:45.789108688 +0000 UTC m=+279.094945430" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.813781 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 00:38:45 crc kubenswrapper[4858]: I0218 00:38:45.952388 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.015919 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.059039 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.117153 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.198255 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.202412 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.283680 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.293402 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.291477 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.375129 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.413581 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.499479 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.551354 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.553607 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.757473 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 00:38:46 crc kubenswrapper[4858]: I0218 00:38:46.785868 4858 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.070858 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.112607 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.173652 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.218489 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.346131 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.422598 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.486945 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.529412 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.536884 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.591533 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.734957 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.761737 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.772614 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.796854 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.883040 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.941890 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.944546 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 00:38:47 crc kubenswrapper[4858]: I0218 00:38:47.966100 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.053626 
4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.157125 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.279922 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.310348 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.383980 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.397095 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.401912 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.495821 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.497752 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.515882 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.617206 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.667020 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.789678 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.804391 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.809325 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.832130 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 00:38:48 crc kubenswrapper[4858]: I0218 00:38:48.967656 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.064291 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.121100 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.217882 4858 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.240842 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.337039 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.340175 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.447408 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.459092 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.523152 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.535868 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.609719 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.867361 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.958809 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 00:38:49 crc kubenswrapper[4858]: I0218 00:38:49.998470 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.081366 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.095843 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.122430 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.225347 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.258175 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.273779 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.273860 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 00:38:50 crc kubenswrapper[4858]: 
I0218 00:38:50.328340 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.357320 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.358300 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.449994 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.579145 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.700928 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.726722 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.895560 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.932406 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.949196 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 00:38:50 crc kubenswrapper[4858]: I0218 00:38:50.985289 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.022636 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.047780 4858 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.048217 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d" gracePeriod=5 Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.060056 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.089384 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.212661 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.308697 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 
00:38:51.493054 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.535107 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.558179 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.568337 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.616187 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.632166 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.951258 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 00:38:51 crc kubenswrapper[4858]: I0218 00:38:51.996766 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.008394 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.065565 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.098686 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.115603 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.131090 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.137763 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.238129 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.245343 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.436015 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.467239 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.614709 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 00:38:52 crc 
kubenswrapper[4858]: I0218 00:38:52.644452 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.819664 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.837126 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.866360 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 00:38:52 crc kubenswrapper[4858]: I0218 00:38:52.966578 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 00:38:53 crc kubenswrapper[4858]: I0218 00:38:53.059735 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 00:38:53 crc kubenswrapper[4858]: I0218 00:38:53.209862 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 00:38:53 crc kubenswrapper[4858]: I0218 00:38:53.318846 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 00:38:53 crc kubenswrapper[4858]: I0218 00:38:53.770806 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 00:38:53 crc kubenswrapper[4858]: I0218 00:38:53.943858 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 00:38:53 crc kubenswrapper[4858]: I0218 00:38:53.966622 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 00:38:53 crc kubenswrapper[4858]: I0218 00:38:53.997918 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 00:38:54 crc kubenswrapper[4858]: I0218 00:38:54.473010 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.118044 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.740934 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdx8z"] Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.741230 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdx8z" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerName="registry-server" containerID="cri-o://2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9" gracePeriod=30 Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.760371 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvp9z"] Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.760675 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-hvp9z" podUID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerName="registry-server" containerID="cri-o://a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992" gracePeriod=30 Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.775522 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mfftk"] Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.775844 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" podUID="693a6651-227a-4a62-85df-4a7e667c3daf" containerName="marketplace-operator" containerID="cri-o://4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f" gracePeriod=30 Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.789021 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sx858"] Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.789365 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sx858" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerName="registry-server" containerID="cri-o://e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5" gracePeriod=30 Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.798341 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvmxz"] Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.798685 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qvmxz" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" containerName="registry-server" containerID="cri-o://ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5" gracePeriod=30 Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.810352 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rlwbh"] Feb 18 00:38:55 crc kubenswrapper[4858]: E0218 00:38:55.810645 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" containerName="installer" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.810661 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" containerName="installer" Feb 18 00:38:55 crc kubenswrapper[4858]: E0218 00:38:55.810673 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.810683 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.810866 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.810892 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd2955ba-18ae-4daf-bf81-94f5b94b8243" containerName="installer" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.811359 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.823588 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rlwbh"] Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.948078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3627ca2b-bc95-444a-a999-b9413f6e1cc0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rlwbh\" (UID: \"3627ca2b-bc95-444a-a999-b9413f6e1cc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.948160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5j9h\" (UniqueName: \"kubernetes.io/projected/3627ca2b-bc95-444a-a999-b9413f6e1cc0-kube-api-access-n5j9h\") pod \"marketplace-operator-79b997595-rlwbh\" (UID: \"3627ca2b-bc95-444a-a999-b9413f6e1cc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:55 crc kubenswrapper[4858]: I0218 00:38:55.948194 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3627ca2b-bc95-444a-a999-b9413f6e1cc0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rlwbh\" (UID: \"3627ca2b-bc95-444a-a999-b9413f6e1cc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.049937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5j9h\" (UniqueName: \"kubernetes.io/projected/3627ca2b-bc95-444a-a999-b9413f6e1cc0-kube-api-access-n5j9h\") pod \"marketplace-operator-79b997595-rlwbh\" (UID: \"3627ca2b-bc95-444a-a999-b9413f6e1cc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.050017 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3627ca2b-bc95-444a-a999-b9413f6e1cc0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rlwbh\" (UID: \"3627ca2b-bc95-444a-a999-b9413f6e1cc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.050126 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3627ca2b-bc95-444a-a999-b9413f6e1cc0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rlwbh\" (UID: \"3627ca2b-bc95-444a-a999-b9413f6e1cc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.052950 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3627ca2b-bc95-444a-a999-b9413f6e1cc0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rlwbh\" (UID: \"3627ca2b-bc95-444a-a999-b9413f6e1cc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.058126 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/3627ca2b-bc95-444a-a999-b9413f6e1cc0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rlwbh\" (UID: \"3627ca2b-bc95-444a-a999-b9413f6e1cc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.066968 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5j9h\" (UniqueName: \"kubernetes.io/projected/3627ca2b-bc95-444a-a999-b9413f6e1cc0-kube-api-access-n5j9h\") pod \"marketplace-operator-79b997595-rlwbh\" (UID: \"3627ca2b-bc95-444a-a999-b9413f6e1cc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.255231 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.255707 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.262049 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.262301 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.265753 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.283867 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.286043 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.293886 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456237 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-catalog-content\") pod \"41a8b51a-55c4-476a-a895-6913c143f33a\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456300 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpzbp\" (UniqueName: \"kubernetes.io/projected/10004d92-1526-4fef-a0c1-dbd5077a46a0-kube-api-access-gpzbp\") pod \"10004d92-1526-4fef-a0c1-dbd5077a46a0\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456329 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-utilities\") pod \"41a8b51a-55c4-476a-a895-6913c143f33a\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456361 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456388 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-catalog-content\") pod \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456409 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456430 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456449 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-utilities\") pod \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhj5l\" (UniqueName: \"kubernetes.io/projected/4f2604ed-931f-4fc1-96dc-cace175d2905-kube-api-access-bhj5l\") pod \"4f2604ed-931f-4fc1-96dc-cace175d2905\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456526 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-utilities\") pod \"10004d92-1526-4fef-a0c1-dbd5077a46a0\" (UID: 
\"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456545 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456561 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456577 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-catalog-content\") pod \"4f2604ed-931f-4fc1-96dc-cace175d2905\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-operator-metrics\") pod \"693a6651-227a-4a62-85df-4a7e667c3daf\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456710 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-catalog-content\") pod \"10004d92-1526-4fef-a0c1-dbd5077a46a0\" (UID: \"10004d92-1526-4fef-a0c1-dbd5077a46a0\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456738 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-trusted-ca\") pod \"693a6651-227a-4a62-85df-4a7e667c3daf\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456756 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-utilities\") pod \"4f2604ed-931f-4fc1-96dc-cace175d2905\" (UID: \"4f2604ed-931f-4fc1-96dc-cace175d2905\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456771 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-588js\" (UniqueName: \"kubernetes.io/projected/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-kube-api-access-588js\") pod \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\" (UID: \"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456798 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s79p2\" (UniqueName: \"kubernetes.io/projected/41a8b51a-55c4-476a-a895-6913c143f33a-kube-api-access-s79p2\") pod \"41a8b51a-55c4-476a-a895-6913c143f33a\" (UID: \"41a8b51a-55c4-476a-a895-6913c143f33a\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.456842 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cnvd\" (UniqueName: \"kubernetes.io/projected/693a6651-227a-4a62-85df-4a7e667c3daf-kube-api-access-2cnvd\") pod 
\"693a6651-227a-4a62-85df-4a7e667c3daf\" (UID: \"693a6651-227a-4a62-85df-4a7e667c3daf\") " Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.458078 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.458210 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-utilities" (OuterVolumeSpecName: "utilities") pod "4f2604ed-931f-4fc1-96dc-cace175d2905" (UID: "4f2604ed-931f-4fc1-96dc-cace175d2905"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.458427 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.458469 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.458805 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "693a6651-227a-4a62-85df-4a7e667c3daf" (UID: "693a6651-227a-4a62-85df-4a7e667c3daf"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.459348 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-utilities" (OuterVolumeSpecName: "utilities") pod "41a8b51a-55c4-476a-a895-6913c143f33a" (UID: "41a8b51a-55c4-476a-a895-6913c143f33a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.459386 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.459511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-utilities" (OuterVolumeSpecName: "utilities") pod "10004d92-1526-4fef-a0c1-dbd5077a46a0" (UID: "10004d92-1526-4fef-a0c1-dbd5077a46a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.460399 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-utilities" (OuterVolumeSpecName: "utilities") pod "b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" (UID: "b398b3cc-afb3-4dad-bcf7-f9b2c9278be6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.461305 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-kube-api-access-588js" (OuterVolumeSpecName: "kube-api-access-588js") pod "b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" (UID: "b398b3cc-afb3-4dad-bcf7-f9b2c9278be6"). InnerVolumeSpecName "kube-api-access-588js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.466915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a8b51a-55c4-476a-a895-6913c143f33a-kube-api-access-s79p2" (OuterVolumeSpecName: "kube-api-access-s79p2") pod "41a8b51a-55c4-476a-a895-6913c143f33a" (UID: "41a8b51a-55c4-476a-a895-6913c143f33a"). InnerVolumeSpecName "kube-api-access-s79p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.467013 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10004d92-1526-4fef-a0c1-dbd5077a46a0-kube-api-access-gpzbp" (OuterVolumeSpecName: "kube-api-access-gpzbp") pod "10004d92-1526-4fef-a0c1-dbd5077a46a0" (UID: "10004d92-1526-4fef-a0c1-dbd5077a46a0"). InnerVolumeSpecName "kube-api-access-gpzbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.467097 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2604ed-931f-4fc1-96dc-cace175d2905-kube-api-access-bhj5l" (OuterVolumeSpecName: "kube-api-access-bhj5l") pod "4f2604ed-931f-4fc1-96dc-cace175d2905" (UID: "4f2604ed-931f-4fc1-96dc-cace175d2905"). InnerVolumeSpecName "kube-api-access-bhj5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.468231 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "693a6651-227a-4a62-85df-4a7e667c3daf" (UID: "693a6651-227a-4a62-85df-4a7e667c3daf"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.469661 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/693a6651-227a-4a62-85df-4a7e667c3daf-kube-api-access-2cnvd" (OuterVolumeSpecName: "kube-api-access-2cnvd") pod "693a6651-227a-4a62-85df-4a7e667c3daf" (UID: "693a6651-227a-4a62-85df-4a7e667c3daf"). InnerVolumeSpecName "kube-api-access-2cnvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.471912 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.487529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" (UID: "b398b3cc-afb3-4dad-bcf7-f9b2c9278be6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.519203 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10004d92-1526-4fef-a0c1-dbd5077a46a0" (UID: "10004d92-1526-4fef-a0c1-dbd5077a46a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.526691 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f2604ed-931f-4fc1-96dc-cace175d2905" (UID: "4f2604ed-931f-4fc1-96dc-cace175d2905"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558644 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cnvd\" (UniqueName: \"kubernetes.io/projected/693a6651-227a-4a62-85df-4a7e667c3daf-kube-api-access-2cnvd\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558679 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpzbp\" (UniqueName: \"kubernetes.io/projected/10004d92-1526-4fef-a0c1-dbd5077a46a0-kube-api-access-gpzbp\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558688 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558697 4858 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558706 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558714 4858 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558721 4858 reconciler_common.go:293] "Volume detached for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558731 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558741 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhj5l\" (UniqueName: \"kubernetes.io/projected/4f2604ed-931f-4fc1-96dc-cace175d2905-kube-api-access-bhj5l\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558749 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558779 4858 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558787 4858 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558795 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558821 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.558879 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10004d92-1526-4fef-a0c1-dbd5077a46a0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.559063 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/693a6651-227a-4a62-85df-4a7e667c3daf-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.559075 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2604ed-931f-4fc1-96dc-cace175d2905-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.559115 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-588js\" (UniqueName: \"kubernetes.io/projected/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6-kube-api-access-588js\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.559125 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s79p2\" (UniqueName: \"kubernetes.io/projected/41a8b51a-55c4-476a-a895-6913c143f33a-kube-api-access-s79p2\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.622361 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41a8b51a-55c4-476a-a895-6913c143f33a" (UID: "41a8b51a-55c4-476a-a895-6913c143f33a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.659821 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a8b51a-55c4-476a-a895-6913c143f33a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.698407 4858 generic.go:334] "Generic (PLEG): container finished" podID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerID="2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9" exitCode=0 Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.698537 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdx8z" event={"ID":"10004d92-1526-4fef-a0c1-dbd5077a46a0","Type":"ContainerDied","Data":"2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.698545 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdx8z" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.698570 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdx8z" event={"ID":"10004d92-1526-4fef-a0c1-dbd5077a46a0","Type":"ContainerDied","Data":"22ed3db9108330673ac7afe94858a22a800cc2d9df1fa20715639763f38df855"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.698590 4858 scope.go:117] "RemoveContainer" containerID="2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.701814 4858 generic.go:334] "Generic (PLEG): container finished" podID="41a8b51a-55c4-476a-a895-6913c143f33a" containerID="ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5" exitCode=0 Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.701966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvmxz" event={"ID":"41a8b51a-55c4-476a-a895-6913c143f33a","Type":"ContainerDied","Data":"ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.702058 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvmxz" event={"ID":"41a8b51a-55c4-476a-a895-6913c143f33a","Type":"ContainerDied","Data":"cf74b8ec99d09618c69773582cd79c7a55aba9e572b20af021a5d67a627ec9bf"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.703175 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvmxz" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.709746 4858 generic.go:334] "Generic (PLEG): container finished" podID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerID="a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992" exitCode=0 Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.709801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvp9z" event={"ID":"4f2604ed-931f-4fc1-96dc-cace175d2905","Type":"ContainerDied","Data":"a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.709828 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvp9z" event={"ID":"4f2604ed-931f-4fc1-96dc-cace175d2905","Type":"ContainerDied","Data":"e462976b0b93a35d2e3928d4a4ad1e8a5d5c1681945550c31642266ae66d15c8"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.709897 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvp9z" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.714888 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rlwbh"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.718866 4858 generic.go:334] "Generic (PLEG): container finished" podID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerID="e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5" exitCode=0 Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.718926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sx858" event={"ID":"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6","Type":"ContainerDied","Data":"e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.718954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sx858" event={"ID":"b398b3cc-afb3-4dad-bcf7-f9b2c9278be6","Type":"ContainerDied","Data":"f6bc62812a1d8d3bd87963011693582adaddcca4e2dcccc9645d1dcdf3fb4ea5"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.719034 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sx858" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.728987 4858 generic.go:334] "Generic (PLEG): container finished" podID="693a6651-227a-4a62-85df-4a7e667c3daf" containerID="4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f" exitCode=0 Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.729163 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.729184 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" event={"ID":"693a6651-227a-4a62-85df-4a7e667c3daf","Type":"ContainerDied","Data":"4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.729563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mfftk" event={"ID":"693a6651-227a-4a62-85df-4a7e667c3daf","Type":"ContainerDied","Data":"69250dbe0111e7b2ad581e8c3de9f6fd90f0032a5a1e45a9a1f528cbf047eebd"} Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.731786 4858 scope.go:117] "RemoveContainer" containerID="befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.734314 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.734351 4858 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d" exitCode=137 Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.734422 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.757694 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdx8z"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.767320 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdx8z"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.776672 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvp9z"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.791365 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hvp9z"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.792083 4858 scope.go:117] "RemoveContainer" containerID="562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.800719 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sx858"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.807664 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sx858"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.811888 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvmxz"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.816764 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qvmxz"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.819618 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mfftk"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.819758 4858 scope.go:117] "RemoveContainer" 
containerID="2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9" Feb 18 00:38:56 crc kubenswrapper[4858]: E0218 00:38:56.820577 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9\": container with ID starting with 2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9 not found: ID does not exist" containerID="2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.820626 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9"} err="failed to get container status \"2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9\": rpc error: code = NotFound desc = could not find container \"2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9\": container with ID starting with 2758533a0a6de1a33a9239d60ae23e3411248f760a5f881ca855041c8d81f1f9 not found: ID does not exist" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.820661 4858 scope.go:117] "RemoveContainer" containerID="befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03" Feb 18 00:38:56 crc kubenswrapper[4858]: E0218 00:38:56.821100 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03\": container with ID starting with befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03 not found: ID does not exist" containerID="befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.821158 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03"} err="failed to get container status \"befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03\": rpc error: code = NotFound desc = could not find container \"befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03\": container with ID starting with befd3930a344f3b47d61233f6afb4b709833977d9514c9ee717956d2c9aa8c03 not found: ID does not exist" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.821202 4858 scope.go:117] "RemoveContainer" containerID="562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4" Feb 18 00:38:56 crc kubenswrapper[4858]: E0218 00:38:56.821603 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4\": container with ID starting with 562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4 not found: ID does not exist" containerID="562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.821626 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4"} err="failed to get container status \"562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4\": rpc error: code = NotFound desc = could not find container \"562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4\": container with ID starting with 
562c4b478755425a5e41a92550d5d9a2e09ada6df63d63243539583de8c504b4 not found: ID does not exist" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.821642 4858 scope.go:117] "RemoveContainer" containerID="ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.823386 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mfftk"] Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.848433 4858 scope.go:117] "RemoveContainer" containerID="580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.870305 4858 scope.go:117] "RemoveContainer" containerID="c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.934059 4858 scope.go:117] "RemoveContainer" containerID="ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5" Feb 18 00:38:56 crc kubenswrapper[4858]: E0218 00:38:56.934793 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5\": container with ID starting with ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5 not found: ID does not exist" containerID="ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.934818 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5"} err="failed to get container status \"ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5\": rpc error: code = NotFound desc = could not find container \"ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5\": container with ID starting with ae880ef5779bdae6cbb7e0f9ff7667bc750a11e9344e74259a443bb41775d5a5 not found: ID does not exist" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.934842 4858 scope.go:117] "RemoveContainer" containerID="580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680" Feb 18 00:38:56 crc kubenswrapper[4858]: E0218 00:38:56.935258 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680\": container with ID starting with 580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680 not found: ID does not exist" containerID="580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.935321 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680"} err="failed to get container status \"580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680\": rpc error: code = NotFound desc = could not find container \"580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680\": container with ID starting with 580b01b413b3d8ace7bdcc6753163b41946d2b93c56814b932e84ffd4694d680 not found: ID does not exist" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.935373 4858 scope.go:117] "RemoveContainer" containerID="c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0" Feb 18 00:38:56 crc kubenswrapper[4858]: E0218 00:38:56.935804 4858 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0\": container with ID starting with c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0 not found: ID does not exist" containerID="c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.935869 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0"} err="failed to get container status \"c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0\": rpc error: code = NotFound desc = could not find container \"c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0\": container with ID starting with c3d34dcbba98eb1ec50bbf17794870fa0e88f090512a4be89898d4c6f64f7ed0 not found: ID does not exist" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.935914 4858 scope.go:117] "RemoveContainer" containerID="a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.953326 4858 scope.go:117] "RemoveContainer" containerID="4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.970413 4858 scope.go:117] "RemoveContainer" containerID="d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.987695 4858 scope.go:117] "RemoveContainer" containerID="a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992" Feb 18 00:38:56 crc kubenswrapper[4858]: E0218 00:38:56.988269 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992\": container with ID starting with a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992 not found: ID does not exist" containerID="a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.988335 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992"} err="failed to get container status \"a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992\": rpc error: code = NotFound desc = could not find container \"a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992\": container with ID starting with a7d533ba59d063dbe87c0745ca91b9b14451019f6f6c9fa89d3493d227517992 not found: ID does not exist" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.988374 4858 scope.go:117] "RemoveContainer" containerID="4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5" Feb 18 00:38:56 crc kubenswrapper[4858]: E0218 00:38:56.989298 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5\": container with ID starting with 4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5 not found: ID does not exist" containerID="4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.989409 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5"} err="failed to get container status \"4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5\": rpc error: code = NotFound desc = could not find container \"4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5\": container with ID starting with 4076af75744d3e9c72caa84b802504bd7f0946c7c5058ec8bc1cc7f15429fec5 not found: ID does not exist" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.989467 4858 scope.go:117] "RemoveContainer" containerID="d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882" Feb 18 00:38:56 crc kubenswrapper[4858]: E0218 00:38:56.990782 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882\": container with ID starting with d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882 not found: ID does not exist" containerID="d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.990866 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882"} err="failed to get container status \"d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882\": rpc error: code = NotFound desc = could not find container \"d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882\": container with ID starting with d7326bda7e3795b6d195b678aa0f791d1b506d09c48ca74deaa4475f8caf4882 not found: ID does not exist" Feb 18 00:38:56 crc kubenswrapper[4858]: I0218 00:38:56.991094 4858 scope.go:117] "RemoveContainer" containerID="e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.007367 4858 scope.go:117] "RemoveContainer" containerID="30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.021324 4858 scope.go:117] "RemoveContainer" containerID="343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.040404 4858 scope.go:117] "RemoveContainer" containerID="e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5" Feb 18 00:38:57 crc kubenswrapper[4858]: E0218 00:38:57.040893 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5\": container with ID starting with e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5 not found: ID does not exist" containerID="e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.040941 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5"} err="failed to get container status \"e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5\": rpc error: code = NotFound desc = could not find container \"e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5\": container with ID starting with e74971339ff1eb44181b19e66f8027bd70a12b4c60f3c7c68cd8a27f1a3ee7b5 not found: ID does not exist" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.040970 4858 
scope.go:117] "RemoveContainer" containerID="30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a" Feb 18 00:38:57 crc kubenswrapper[4858]: E0218 00:38:57.041625 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a\": container with ID starting with 30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a not found: ID does not exist" containerID="30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.041669 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a"} err="failed to get container status \"30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a\": rpc error: code = NotFound desc = could not find container \"30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a\": container with ID starting with 30c9f9aec48aa54ef707d42ef795a626d68c9fcc859f6aab2ebcdade5357297a not found: ID does not exist" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.041699 4858 scope.go:117] "RemoveContainer" containerID="343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66" Feb 18 00:38:57 crc kubenswrapper[4858]: E0218 00:38:57.042053 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66\": container with ID starting with 343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66 not found: ID does not exist" containerID="343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.042084 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66"} err="failed to get container status \"343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66\": rpc error: code = NotFound desc = could not find container \"343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66\": container with ID starting with 343bed8231922f789d9f6b97e05101eadff3b1b6d2e999d0e0c66579c3cedb66 not found: ID does not exist" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.042102 4858 scope.go:117] "RemoveContainer" containerID="4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.055062 4858 scope.go:117] "RemoveContainer" containerID="4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f" Feb 18 00:38:57 crc kubenswrapper[4858]: E0218 00:38:57.055681 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f\": container with ID starting with 4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f not found: ID does not exist" containerID="4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.055742 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f"} err="failed to get container status 
\"4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f\": rpc error: code = NotFound desc = could not find container \"4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f\": container with ID starting with 4d539ecc6f518ce60a75b7e8e902f38b5e861d6f94bb6de6aa4a318e41d7343f not found: ID does not exist" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.055779 4858 scope.go:117] "RemoveContainer" containerID="44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.068801 4858 scope.go:117] "RemoveContainer" containerID="44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d" Feb 18 00:38:57 crc kubenswrapper[4858]: E0218 00:38:57.069372 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d\": container with ID starting with 44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d not found: ID does not exist" containerID="44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.069410 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d"} err="failed to get container status \"44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d\": rpc error: code = NotFound desc = could not find container \"44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d\": container with ID starting with 44de843edab25d62e8a479d23debe6dac8285ea4889e6165cdd0e0f6c5901a6d not found: ID does not exist" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.427793 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" path="/var/lib/kubelet/pods/10004d92-1526-4fef-a0c1-dbd5077a46a0/volumes" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.428725 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" path="/var/lib/kubelet/pods/41a8b51a-55c4-476a-a895-6913c143f33a/volumes" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.429426 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f2604ed-931f-4fc1-96dc-cace175d2905" path="/var/lib/kubelet/pods/4f2604ed-931f-4fc1-96dc-cace175d2905/volumes" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.430688 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="693a6651-227a-4a62-85df-4a7e667c3daf" path="/var/lib/kubelet/pods/693a6651-227a-4a62-85df-4a7e667c3daf/volumes" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.431166 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" path="/var/lib/kubelet/pods/b398b3cc-afb3-4dad-bcf7-f9b2c9278be6/volumes" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.432038 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.432328 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.440181 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.440214 4858 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2462edfd-4752-42e4-ba6b-3a521a6d8906" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.443111 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.443130 4858 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2462edfd-4752-42e4-ba6b-3a521a6d8906" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.744780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" event={"ID":"3627ca2b-bc95-444a-a999-b9413f6e1cc0","Type":"ContainerStarted","Data":"aa6ff8fca320408590feef72fbec190e9cc570220f3de7b090b11b1ca2cef617"} Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.744853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" event={"ID":"3627ca2b-bc95-444a-a999-b9413f6e1cc0","Type":"ContainerStarted","Data":"d5e12a8ad4f980bf84af2091a849a49336c3b8aac80f34e81fe5080b6ac5b0a3"} Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.745099 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.750280 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" Feb 18 00:38:57 crc kubenswrapper[4858]: I0218 00:38:57.759283 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rlwbh" podStartSLOduration=2.759257131 podStartE2EDuration="2.759257131s" podCreationTimestamp="2026-02-18 00:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:38:57.7592191 +0000 UTC m=+291.065055852" watchObservedRunningTime="2026-02-18 00:38:57.759257131 +0000 UTC m=+291.065093903" Feb 18 00:39:07 crc kubenswrapper[4858]: I0218 00:39:07.200087 4858 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 00:39:07 crc kubenswrapper[4858]: I0218 00:39:07.481068 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 00:39:12 crc kubenswrapper[4858]: I0218 00:39:12.868413 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 00:39:15 crc kubenswrapper[4858]: I0218 00:39:15.852085 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 00:39:16 crc kubenswrapper[4858]: I0218 00:39:16.726980 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 00:39:19 crc kubenswrapper[4858]: I0218 00:39:19.394981 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 00:39:20 crc 
kubenswrapper[4858]: I0218 00:39:20.288473 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 00:39:22 crc kubenswrapper[4858]: I0218 00:39:22.033798 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 00:39:27 crc kubenswrapper[4858]: I0218 00:39:27.652469 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 00:39:31 crc kubenswrapper[4858]: I0218 00:39:31.164431 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 00:39:32 crc kubenswrapper[4858]: I0218 00:39:32.846741 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 00:39:35 crc kubenswrapper[4858]: I0218 00:39:35.673747 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ztstf"] Feb 18 00:39:35 crc kubenswrapper[4858]: I0218 00:39:35.673994 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" podUID="5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" containerName="controller-manager" containerID="cri-o://43cc232211a808de4ccadb4be357271e9ce5b36aa4013d7c83421c138e02db43" gracePeriod=30 Feb 18 00:39:35 crc kubenswrapper[4858]: I0218 00:39:35.766820 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng"] Feb 18 00:39:35 crc kubenswrapper[4858]: I0218 00:39:35.767020 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" podUID="ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" containerName="route-controller-manager" containerID="cri-o://2c8e523cb317142556550493c13c60f9236117987a7899458ccfe36014406aed" gracePeriod=30 Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.109237 4858 generic.go:334] "Generic (PLEG): container finished" podID="5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" containerID="43cc232211a808de4ccadb4be357271e9ce5b36aa4013d7c83421c138e02db43" exitCode=0 Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.109355 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" event={"ID":"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba","Type":"ContainerDied","Data":"43cc232211a808de4ccadb4be357271e9ce5b36aa4013d7c83421c138e02db43"} Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.109663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" event={"ID":"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba","Type":"ContainerDied","Data":"fef11b98050f06100b16e13153829a2a37c3547dcc522691d0d0924e37fbc312"} Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.109685 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef11b98050f06100b16e13153829a2a37c3547dcc522691d0d0924e37fbc312" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.111433 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.111713 4858 generic.go:334] "Generic (PLEG): container finished" podID="ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" containerID="2c8e523cb317142556550493c13c60f9236117987a7899458ccfe36014406aed" exitCode=0 Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.111750 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" event={"ID":"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383","Type":"ContainerDied","Data":"2c8e523cb317142556550493c13c60f9236117987a7899458ccfe36014406aed"} Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.138901 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.203736 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-proxy-ca-bundles\") pod \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.203802 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-config\") pod \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.203832 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-serving-cert\") pod \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.203859 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-client-ca\") pod \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.203895 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn29m\" (UniqueName: \"kubernetes.io/projected/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-kube-api-access-jn29m\") pod \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.203927 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-config\") pod \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.203956 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-serving-cert\") pod \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\" (UID: \"5d7e94f0-dd10-424a-8a9f-e3d98854c5ba\") " Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.203990 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwhxq\" (UniqueName: 
\"kubernetes.io/projected/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-kube-api-access-fwhxq\") pod \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.204010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-client-ca\") pod \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\" (UID: \"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383\") " Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.205121 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" (UID: "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.205213 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" (UID: "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.205282 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-config" (OuterVolumeSpecName: "config") pod "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" (UID: "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.205327 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" (UID: "ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.205406 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-config" (OuterVolumeSpecName: "config") pod "ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" (UID: "ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.210085 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-kube-api-access-fwhxq" (OuterVolumeSpecName: "kube-api-access-fwhxq") pod "ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" (UID: "ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383"). InnerVolumeSpecName "kube-api-access-fwhxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.210099 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-kube-api-access-jn29m" (OuterVolumeSpecName: "kube-api-access-jn29m") pod "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" (UID: "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba"). InnerVolumeSpecName "kube-api-access-jn29m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.210949 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" (UID: "ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.212800 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" (UID: "5d7e94f0-dd10-424a-8a9f-e3d98854c5ba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.305100 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.305178 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.305191 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.305202 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.305215 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn29m\" (UniqueName: \"kubernetes.io/projected/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-kube-api-access-jn29m\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.305229 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.305239 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.305251 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwhxq\" (UniqueName: \"kubernetes.io/projected/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-kube-api-access-fwhxq\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:36 crc kubenswrapper[4858]: I0218 00:39:36.305262 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.123933 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ztstf" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.123961 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.123961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng" event={"ID":"ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383","Type":"ContainerDied","Data":"deadd615ef7cafa1e15162d1df11e07a6ccd324d07d0d0efdf50c81d7204a6b5"} Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.124041 4858 scope.go:117] "RemoveContainer" containerID="2c8e523cb317142556550493c13c60f9236117987a7899458ccfe36014406aed" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.191117 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ztstf"] Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.198483 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ztstf"] Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.207301 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng"] Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.210919 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jx5ng"] Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.431677 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" path="/var/lib/kubelet/pods/5d7e94f0-dd10-424a-8a9f-e3d98854c5ba/volumes" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.433121 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" path="/var/lib/kubelet/pods/ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383/volumes" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.655423 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw"] Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.655829 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.655855 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.655876 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="693a6651-227a-4a62-85df-4a7e667c3daf" containerName="marketplace-operator" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.655891 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="693a6651-227a-4a62-85df-4a7e667c3daf" containerName="marketplace-operator" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.655917 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" containerName="extract-utilities" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.655933 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" containerName="extract-utilities" Feb 18 00:39:37 crc 
kubenswrapper[4858]: E0218 00:39:37.655951 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" containerName="route-controller-manager" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.655968 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" containerName="route-controller-manager" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.655989 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerName="extract-utilities" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656009 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerName="extract-utilities" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656032 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerName="extract-utilities" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656049 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerName="extract-utilities" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656071 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" containerName="controller-manager" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656087 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" containerName="controller-manager" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656117 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656137 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656157 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerName="extract-content" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656172 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerName="extract-content" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656192 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerName="extract-content" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656206 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerName="extract-content" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656239 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656255 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656277 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656293 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2604ed-931f-4fc1-96dc-cace175d2905" 
containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656320 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerName="extract-content" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656333 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerName="extract-content" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656349 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerName="extract-utilities" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656361 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerName="extract-utilities" Feb 18 00:39:37 crc kubenswrapper[4858]: E0218 00:39:37.656383 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" containerName="extract-content" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656398 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" containerName="extract-content" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656664 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d7e94f0-dd10-424a-8a9f-e3d98854c5ba" containerName="controller-manager" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656695 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef2be3d0-ea21-4714-9ef9-8bc8a2ca0383" containerName="route-controller-manager" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656722 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a8b51a-55c4-476a-a895-6913c143f33a" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656740 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="693a6651-227a-4a62-85df-4a7e667c3daf" containerName="marketplace-operator" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656763 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b398b3cc-afb3-4dad-bcf7-f9b2c9278be6" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656785 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2604ed-931f-4fc1-96dc-cace175d2905" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.656811 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="10004d92-1526-4fef-a0c1-dbd5077a46a0" containerName="registry-server" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.657369 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.661924 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.663652 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.664129 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.664461 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.664813 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.670088 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn"] Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.671377 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.673356 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.675562 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn"] Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.676205 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.676444 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.676565 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.676739 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.677459 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.677555 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.682476 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw"] Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.689398 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.825421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-config\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.825471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-config\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.826087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-serving-cert\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.826165 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-client-ca\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.826229 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-client-ca\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.826293 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-proxy-ca-bundles\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.826358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgmc2\" (UniqueName: \"kubernetes.io/projected/2f1026ca-d89e-4d5d-a672-b811a5114c4c-kube-api-access-dgmc2\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.826480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krqk\" (UniqueName: \"kubernetes.io/projected/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-kube-api-access-7krqk\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.826589 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f1026ca-d89e-4d5d-a672-b811a5114c4c-serving-cert\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.927338 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f1026ca-d89e-4d5d-a672-b811a5114c4c-serving-cert\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.927446 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-config\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.927554 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-config\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.927642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-serving-cert\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.927694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-client-ca\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.929803 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-client-ca\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.929949 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-client-ca\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.930054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-proxy-ca-bundles\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: 
\"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.930097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgmc2\" (UniqueName: \"kubernetes.io/projected/2f1026ca-d89e-4d5d-a672-b811a5114c4c-kube-api-access-dgmc2\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.930145 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7krqk\" (UniqueName: \"kubernetes.io/projected/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-kube-api-access-7krqk\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.930794 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-config\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.934365 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-proxy-ca-bundles\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.936476 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-config\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.936685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-serving-cert\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.936932 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-client-ca\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.940039 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f1026ca-d89e-4d5d-a672-b811a5114c4c-serving-cert\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.962931 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7krqk\" (UniqueName: \"kubernetes.io/projected/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-kube-api-access-7krqk\") pod \"route-controller-manager-7d657575cd-94vjn\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:37 crc kubenswrapper[4858]: I0218 00:39:37.963402 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgmc2\" (UniqueName: \"kubernetes.io/projected/2f1026ca-d89e-4d5d-a672-b811a5114c4c-kube-api-access-dgmc2\") pod \"controller-manager-775cbc8bcd-bb8rw\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:38 crc kubenswrapper[4858]: I0218 00:39:38.007100 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:38 crc kubenswrapper[4858]: I0218 00:39:38.018273 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:38 crc kubenswrapper[4858]: I0218 00:39:38.248106 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw"] Feb 18 00:39:38 crc kubenswrapper[4858]: I0218 00:39:38.295886 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn"] Feb 18 00:39:38 crc kubenswrapper[4858]: W0218 00:39:38.299607 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cbc84e5_5359_4976_b86c_ebb817a9a7f5.slice/crio-088aee6e5a733b5b2b404ef3d01392dd8bfb392f0825a8583947291f4f8203a3 WatchSource:0}: Error finding container 088aee6e5a733b5b2b404ef3d01392dd8bfb392f0825a8583947291f4f8203a3: Status 404 returned error can't find the container with id 088aee6e5a733b5b2b404ef3d01392dd8bfb392f0825a8583947291f4f8203a3 Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.163517 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" event={"ID":"2f1026ca-d89e-4d5d-a672-b811a5114c4c","Type":"ContainerStarted","Data":"8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f"} Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.163865 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" event={"ID":"2f1026ca-d89e-4d5d-a672-b811a5114c4c","Type":"ContainerStarted","Data":"625087a32cb8017194abc9a697963ea2ede5b96ea811ecc6e5cc940f8cd8a873"} Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.163891 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.166161 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" event={"ID":"1cbc84e5-5359-4976-b86c-ebb817a9a7f5","Type":"ContainerStarted","Data":"e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573"} Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.166193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" event={"ID":"1cbc84e5-5359-4976-b86c-ebb817a9a7f5","Type":"ContainerStarted","Data":"088aee6e5a733b5b2b404ef3d01392dd8bfb392f0825a8583947291f4f8203a3"} Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.166475 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.171553 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.171919 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.184603 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" podStartSLOduration=4.184561213 podStartE2EDuration="4.184561213s" podCreationTimestamp="2026-02-18 00:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:39:39.182388996 +0000 UTC m=+332.488225738" watchObservedRunningTime="2026-02-18 00:39:39.184561213 +0000 UTC m=+332.490397965" Feb 18 00:39:39 crc kubenswrapper[4858]: I0218 00:39:39.237482 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" podStartSLOduration=4.237454896 podStartE2EDuration="4.237454896s" podCreationTimestamp="2026-02-18 00:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:39:39.211464221 +0000 UTC m=+332.517300963" watchObservedRunningTime="2026-02-18 00:39:39.237454896 +0000 UTC m=+332.543291638" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.184834 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw"] Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.185518 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" podUID="2f1026ca-d89e-4d5d-a672-b811a5114c4c" containerName="controller-manager" containerID="cri-o://8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f" gracePeriod=30 Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.202814 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn"] Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.203478 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" podUID="1cbc84e5-5359-4976-b86c-ebb817a9a7f5" containerName="route-controller-manager" containerID="cri-o://e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573" gracePeriod=30 Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.708248 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.787592 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.792059 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-serving-cert\") pod \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.792126 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-config\") pod \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.792202 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7krqk\" (UniqueName: \"kubernetes.io/projected/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-kube-api-access-7krqk\") pod \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.792274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-client-ca\") pod \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\" (UID: \"1cbc84e5-5359-4976-b86c-ebb817a9a7f5\") " Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.793482 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "1cbc84e5-5359-4976-b86c-ebb817a9a7f5" (UID: "1cbc84e5-5359-4976-b86c-ebb817a9a7f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.793661 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-config" (OuterVolumeSpecName: "config") pod "1cbc84e5-5359-4976-b86c-ebb817a9a7f5" (UID: "1cbc84e5-5359-4976-b86c-ebb817a9a7f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.797360 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1cbc84e5-5359-4976-b86c-ebb817a9a7f5" (UID: "1cbc84e5-5359-4976-b86c-ebb817a9a7f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.798620 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-kube-api-access-7krqk" (OuterVolumeSpecName: "kube-api-access-7krqk") pod "1cbc84e5-5359-4976-b86c-ebb817a9a7f5" (UID: "1cbc84e5-5359-4976-b86c-ebb817a9a7f5"). InnerVolumeSpecName "kube-api-access-7krqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.893385 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-proxy-ca-bundles\") pod \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.893474 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-client-ca\") pod \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.893588 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-config\") pod \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.893649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f1026ca-d89e-4d5d-a672-b811a5114c4c-serving-cert\") pod \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.893767 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgmc2\" (UniqueName: \"kubernetes.io/projected/2f1026ca-d89e-4d5d-a672-b811a5114c4c-kube-api-access-dgmc2\") pod \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\" (UID: \"2f1026ca-d89e-4d5d-a672-b811a5114c4c\") " Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.894369 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.894424 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.894454 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7krqk\" (UniqueName: \"kubernetes.io/projected/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-kube-api-access-7krqk\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.894479 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cbc84e5-5359-4976-b86c-ebb817a9a7f5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.894608 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2f1026ca-d89e-4d5d-a672-b811a5114c4c" (UID: "2f1026ca-d89e-4d5d-a672-b811a5114c4c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.894658 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-client-ca" (OuterVolumeSpecName: "client-ca") pod "2f1026ca-d89e-4d5d-a672-b811a5114c4c" (UID: "2f1026ca-d89e-4d5d-a672-b811a5114c4c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.894808 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-config" (OuterVolumeSpecName: "config") pod "2f1026ca-d89e-4d5d-a672-b811a5114c4c" (UID: "2f1026ca-d89e-4d5d-a672-b811a5114c4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.898431 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f1026ca-d89e-4d5d-a672-b811a5114c4c-kube-api-access-dgmc2" (OuterVolumeSpecName: "kube-api-access-dgmc2") pod "2f1026ca-d89e-4d5d-a672-b811a5114c4c" (UID: "2f1026ca-d89e-4d5d-a672-b811a5114c4c"). InnerVolumeSpecName "kube-api-access-dgmc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.898481 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1026ca-d89e-4d5d-a672-b811a5114c4c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2f1026ca-d89e-4d5d-a672-b811a5114c4c" (UID: "2f1026ca-d89e-4d5d-a672-b811a5114c4c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.995980 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgmc2\" (UniqueName: \"kubernetes.io/projected/2f1026ca-d89e-4d5d-a672-b811a5114c4c-kube-api-access-dgmc2\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.996040 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.996061 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.996084 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f1026ca-d89e-4d5d-a672-b811a5114c4c-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:49 crc kubenswrapper[4858]: I0218 00:39:49.996102 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f1026ca-d89e-4d5d-a672-b811a5114c4c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.241315 4858 generic.go:334] "Generic (PLEG): container finished" podID="2f1026ca-d89e-4d5d-a672-b811a5114c4c" containerID="8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f" exitCode=0 Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.241444 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.241446 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" event={"ID":"2f1026ca-d89e-4d5d-a672-b811a5114c4c","Type":"ContainerDied","Data":"8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f"} Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.241729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw" event={"ID":"2f1026ca-d89e-4d5d-a672-b811a5114c4c","Type":"ContainerDied","Data":"625087a32cb8017194abc9a697963ea2ede5b96ea811ecc6e5cc940f8cd8a873"} Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.241835 4858 scope.go:117] "RemoveContainer" containerID="8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.244529 4858 generic.go:334] "Generic (PLEG): container finished" podID="1cbc84e5-5359-4976-b86c-ebb817a9a7f5" containerID="e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573" exitCode=0 Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.244630 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" event={"ID":"1cbc84e5-5359-4976-b86c-ebb817a9a7f5","Type":"ContainerDied","Data":"e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573"} Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.244723 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" event={"ID":"1cbc84e5-5359-4976-b86c-ebb817a9a7f5","Type":"ContainerDied","Data":"088aee6e5a733b5b2b404ef3d01392dd8bfb392f0825a8583947291f4f8203a3"} Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.244763 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.266667 4858 scope.go:117] "RemoveContainer" containerID="8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f" Feb 18 00:39:50 crc kubenswrapper[4858]: E0218 00:39:50.269321 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f\": container with ID starting with 8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f not found: ID does not exist" containerID="8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.269598 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f"} err="failed to get container status \"8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f\": rpc error: code = NotFound desc = could not find container \"8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f\": container with ID starting with 8d6566c7a7dee9cf0430e72b4edeac8f8a61d5a287cf82b2a5e647f60e5c746f not found: ID does not exist" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.269839 4858 scope.go:117] "RemoveContainer" containerID="e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.300174 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn"] Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.301843 4858 scope.go:117] "RemoveContainer" containerID="e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573" Feb 18 00:39:50 crc kubenswrapper[4858]: E0218 00:39:50.304527 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573\": container with ID starting with e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573 not found: ID does not exist" containerID="e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.304601 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573"} err="failed to get container status \"e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573\": rpc error: code = NotFound desc = could not find container \"e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573\": container with ID starting with e3e7dd12870a31b5c632ee2711fe780deb5cbe76ec319b39b7ef767234f26573 not found: ID does not exist" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.309728 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d657575cd-94vjn"] Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.315292 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw"] Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.320616 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-775cbc8bcd-bb8rw"] Feb 18 
00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.653724 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t"] Feb 18 00:39:50 crc kubenswrapper[4858]: E0218 00:39:50.654290 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbc84e5-5359-4976-b86c-ebb817a9a7f5" containerName="route-controller-manager" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.654427 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbc84e5-5359-4976-b86c-ebb817a9a7f5" containerName="route-controller-manager" Feb 18 00:39:50 crc kubenswrapper[4858]: E0218 00:39:50.654611 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1026ca-d89e-4d5d-a672-b811a5114c4c" containerName="controller-manager" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.654774 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1026ca-d89e-4d5d-a672-b811a5114c4c" containerName="controller-manager" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.655062 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cbc84e5-5359-4976-b86c-ebb817a9a7f5" containerName="route-controller-manager" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.655218 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f1026ca-d89e-4d5d-a672-b811a5114c4c" containerName="controller-manager" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.656223 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.658871 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.661358 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.661995 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.662822 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.663870 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.665233 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.668767 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s"] Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.670174 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.678586 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.678995 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.679245 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.679372 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.679766 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.679908 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.679949 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.685728 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t"] Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.703632 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s"] Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.806835 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-config\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.806978 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k7t5\" (UniqueName: \"kubernetes.io/projected/7de6959a-f61a-4b90-b3ef-6872e71b0787-kube-api-access-2k7t5\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.807048 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-client-ca\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.807090 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-client-ca\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " 
pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.807385 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7de6959a-f61a-4b90-b3ef-6872e71b0787-serving-cert\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.807484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-config\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.807582 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31b78c38-1f04-4f90-b170-b4dabaad65c8-serving-cert\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.807683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwtbp\" (UniqueName: \"kubernetes.io/projected/31b78c38-1f04-4f90-b170-b4dabaad65c8-kube-api-access-mwtbp\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.807730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-proxy-ca-bundles\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.909607 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-client-ca\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.909915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-client-ca\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.910037 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7de6959a-f61a-4b90-b3ef-6872e71b0787-serving-cert\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " 
pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.910096 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-config\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.910251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31b78c38-1f04-4f90-b170-b4dabaad65c8-serving-cert\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.910319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwtbp\" (UniqueName: \"kubernetes.io/projected/31b78c38-1f04-4f90-b170-b4dabaad65c8-kube-api-access-mwtbp\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.910351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-proxy-ca-bundles\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.910404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-config\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.910465 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k7t5\" (UniqueName: \"kubernetes.io/projected/7de6959a-f61a-4b90-b3ef-6872e71b0787-kube-api-access-2k7t5\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.911014 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-client-ca\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.911521 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-config\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.912059 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-client-ca\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.914551 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-proxy-ca-bundles\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.914856 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-config\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.917530 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7de6959a-f61a-4b90-b3ef-6872e71b0787-serving-cert\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.922176 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31b78c38-1f04-4f90-b170-b4dabaad65c8-serving-cert\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.941996 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k7t5\" (UniqueName: \"kubernetes.io/projected/7de6959a-f61a-4b90-b3ef-6872e71b0787-kube-api-access-2k7t5\") pod \"route-controller-manager-84bf578688-q8k5s\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.942806 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwtbp\" (UniqueName: \"kubernetes.io/projected/31b78c38-1f04-4f90-b170-b4dabaad65c8-kube-api-access-mwtbp\") pod \"controller-manager-7c5cfd5f7b-bwx6t\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:50 crc kubenswrapper[4858]: I0218 00:39:50.996125 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:51 crc kubenswrapper[4858]: I0218 00:39:51.014930 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:51 crc kubenswrapper[4858]: I0218 00:39:51.253387 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t"] Feb 18 00:39:51 crc kubenswrapper[4858]: W0218 00:39:51.276170 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31b78c38_1f04_4f90_b170_b4dabaad65c8.slice/crio-d2108904e55cf813d6c8c872b17e1dc77376272cee8352603fe3f873a3061edd WatchSource:0}: Error finding container d2108904e55cf813d6c8c872b17e1dc77376272cee8352603fe3f873a3061edd: Status 404 returned error can't find the container with id d2108904e55cf813d6c8c872b17e1dc77376272cee8352603fe3f873a3061edd Feb 18 00:39:51 crc kubenswrapper[4858]: I0218 00:39:51.300996 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s"] Feb 18 00:39:51 crc kubenswrapper[4858]: W0218 00:39:51.313662 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7de6959a_f61a_4b90_b3ef_6872e71b0787.slice/crio-16b5eaa6501ef7efb5354ab652b08b45de1395743d3709f8ea9b7e8833357e27 WatchSource:0}: Error finding container 16b5eaa6501ef7efb5354ab652b08b45de1395743d3709f8ea9b7e8833357e27: Status 404 returned error can't find the container with id 16b5eaa6501ef7efb5354ab652b08b45de1395743d3709f8ea9b7e8833357e27 Feb 18 00:39:51 crc kubenswrapper[4858]: I0218 00:39:51.427121 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cbc84e5-5359-4976-b86c-ebb817a9a7f5" path="/var/lib/kubelet/pods/1cbc84e5-5359-4976-b86c-ebb817a9a7f5/volumes" Feb 18 00:39:51 crc kubenswrapper[4858]: I0218 00:39:51.428809 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f1026ca-d89e-4d5d-a672-b811a5114c4c" path="/var/lib/kubelet/pods/2f1026ca-d89e-4d5d-a672-b811a5114c4c/volumes" Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.266052 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" event={"ID":"31b78c38-1f04-4f90-b170-b4dabaad65c8","Type":"ContainerStarted","Data":"654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a"} Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.266412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" event={"ID":"31b78c38-1f04-4f90-b170-b4dabaad65c8","Type":"ContainerStarted","Data":"d2108904e55cf813d6c8c872b17e1dc77376272cee8352603fe3f873a3061edd"} Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.266436 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.268014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" event={"ID":"7de6959a-f61a-4b90-b3ef-6872e71b0787","Type":"ContainerStarted","Data":"29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb"} Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.268071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" 
event={"ID":"7de6959a-f61a-4b90-b3ef-6872e71b0787","Type":"ContainerStarted","Data":"16b5eaa6501ef7efb5354ab652b08b45de1395743d3709f8ea9b7e8833357e27"} Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.268260 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.272479 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.274368 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.283950 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" podStartSLOduration=3.283936121 podStartE2EDuration="3.283936121s" podCreationTimestamp="2026-02-18 00:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:39:52.282297839 +0000 UTC m=+345.588134571" watchObservedRunningTime="2026-02-18 00:39:52.283936121 +0000 UTC m=+345.589772853" Feb 18 00:39:52 crc kubenswrapper[4858]: I0218 00:39:52.315686 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" podStartSLOduration=3.315670364 podStartE2EDuration="3.315670364s" podCreationTimestamp="2026-02-18 00:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:39:52.31202641 +0000 UTC m=+345.617863142" watchObservedRunningTime="2026-02-18 00:39:52.315670364 +0000 UTC m=+345.621507096" Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.265111 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.265608 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.490303 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t"] Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.490773 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" podUID="31b78c38-1f04-4f90-b170-b4dabaad65c8" containerName="controller-manager" containerID="cri-o://654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a" gracePeriod=30 Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.522910 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s"] Feb 18 00:39:55 crc 
kubenswrapper[4858]: I0218 00:39:55.523418 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" podUID="7de6959a-f61a-4b90-b3ef-6872e71b0787" containerName="route-controller-manager" containerID="cri-o://29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb" gracePeriod=30 Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.915873 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zmv6q"] Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.918668 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.932761 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.934241 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmv6q"] Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.978547 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e67a988-e2c1-433a-88de-286490057c27-catalog-content\") pod \"redhat-marketplace-zmv6q\" (UID: \"9e67a988-e2c1-433a-88de-286490057c27\") " pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.978605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e67a988-e2c1-433a-88de-286490057c27-utilities\") pod \"redhat-marketplace-zmv6q\" (UID: \"9e67a988-e2c1-433a-88de-286490057c27\") " pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:55 crc kubenswrapper[4858]: I0218 00:39:55.978631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkqf6\" (UniqueName: \"kubernetes.io/projected/9e67a988-e2c1-433a-88de-286490057c27-kube-api-access-gkqf6\") pod \"redhat-marketplace-zmv6q\" (UID: \"9e67a988-e2c1-433a-88de-286490057c27\") " pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.079566 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e67a988-e2c1-433a-88de-286490057c27-catalog-content\") pod \"redhat-marketplace-zmv6q\" (UID: \"9e67a988-e2c1-433a-88de-286490057c27\") " pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.079660 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e67a988-e2c1-433a-88de-286490057c27-utilities\") pod \"redhat-marketplace-zmv6q\" (UID: \"9e67a988-e2c1-433a-88de-286490057c27\") " pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.079691 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkqf6\" (UniqueName: \"kubernetes.io/projected/9e67a988-e2c1-433a-88de-286490057c27-kube-api-access-gkqf6\") pod \"redhat-marketplace-zmv6q\" (UID: \"9e67a988-e2c1-433a-88de-286490057c27\") " pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 
00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.080182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e67a988-e2c1-433a-88de-286490057c27-catalog-content\") pod \"redhat-marketplace-zmv6q\" (UID: \"9e67a988-e2c1-433a-88de-286490057c27\") " pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.080349 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e67a988-e2c1-433a-88de-286490057c27-utilities\") pod \"redhat-marketplace-zmv6q\" (UID: \"9e67a988-e2c1-433a-88de-286490057c27\") " pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.112610 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2j9qm"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.114374 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkqf6\" (UniqueName: \"kubernetes.io/projected/9e67a988-e2c1-433a-88de-286490057c27-kube-api-access-gkqf6\") pod \"redhat-marketplace-zmv6q\" (UID: \"9e67a988-e2c1-433a-88de-286490057c27\") " pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.116874 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.118992 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.132396 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2j9qm"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.199767 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.218030 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.270046 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8"] Feb 18 00:39:56 crc kubenswrapper[4858]: E0218 00:39:56.270229 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b78c38-1f04-4f90-b170-b4dabaad65c8" containerName="controller-manager" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.270240 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b78c38-1f04-4f90-b170-b4dabaad65c8" containerName="controller-manager" Feb 18 00:39:56 crc kubenswrapper[4858]: E0218 00:39:56.270253 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de6959a-f61a-4b90-b3ef-6872e71b0787" containerName="route-controller-manager" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.270259 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de6959a-f61a-4b90-b3ef-6872e71b0787" containerName="route-controller-manager" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.270356 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b78c38-1f04-4f90-b170-b4dabaad65c8" containerName="controller-manager" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.270371 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de6959a-f61a-4b90-b3ef-6872e71b0787" containerName="route-controller-manager" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.270745 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.277083 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.282066 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-catalog-content\") pod \"community-operators-2j9qm\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.282140 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-utilities\") pod \"community-operators-2j9qm\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.282158 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq89p\" (UniqueName: \"kubernetes.io/projected/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-kube-api-access-hq89p\") pod \"community-operators-2j9qm\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.314814 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.315965 4858 generic.go:334] "Generic (PLEG): container finished" podID="31b78c38-1f04-4f90-b170-b4dabaad65c8" containerID="654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a" exitCode=0 Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.316065 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.316305 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" event={"ID":"31b78c38-1f04-4f90-b170-b4dabaad65c8","Type":"ContainerDied","Data":"654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a"} Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.316330 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t" event={"ID":"31b78c38-1f04-4f90-b170-b4dabaad65c8","Type":"ContainerDied","Data":"d2108904e55cf813d6c8c872b17e1dc77376272cee8352603fe3f873a3061edd"} Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.316347 4858 scope.go:117] "RemoveContainer" containerID="654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.321030 4858 generic.go:334] "Generic (PLEG): container finished" podID="7de6959a-f61a-4b90-b3ef-6872e71b0787" containerID="29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb" exitCode=0 Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.321065 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" event={"ID":"7de6959a-f61a-4b90-b3ef-6872e71b0787","Type":"ContainerDied","Data":"29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb"} Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.321084 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.321088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s" event={"ID":"7de6959a-f61a-4b90-b3ef-6872e71b0787","Type":"ContainerDied","Data":"16b5eaa6501ef7efb5354ab652b08b45de1395743d3709f8ea9b7e8833357e27"} Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.331352 4858 scope.go:117] "RemoveContainer" containerID="654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a" Feb 18 00:39:56 crc kubenswrapper[4858]: E0218 00:39:56.332386 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a\": container with ID starting with 654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a not found: ID does not exist" containerID="654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.332453 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a"} err="failed to get container status \"654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a\": rpc error: code = NotFound desc = could not find container \"654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a\": container with ID starting with 654589eb938aac78a0c6518b2af9c0b991d26e3da12dffb1885a11162f1cd35a not found: ID does not exist" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.332478 4858 scope.go:117] "RemoveContainer" containerID="29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb" Feb 
18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.350702 4858 scope.go:117] "RemoveContainer" containerID="29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb" Feb 18 00:39:56 crc kubenswrapper[4858]: E0218 00:39:56.351115 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb\": container with ID starting with 29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb not found: ID does not exist" containerID="29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.351163 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb"} err="failed to get container status \"29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb\": rpc error: code = NotFound desc = could not find container \"29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb\": container with ID starting with 29ba6db24d53dea4a68c420f627616039261b07b9b700aa3c988efe106bcb0fb not found: ID does not exist" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383314 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31b78c38-1f04-4f90-b170-b4dabaad65c8-serving-cert\") pod \"31b78c38-1f04-4f90-b170-b4dabaad65c8\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383370 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-client-ca\") pod \"31b78c38-1f04-4f90-b170-b4dabaad65c8\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383396 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k7t5\" (UniqueName: \"kubernetes.io/projected/7de6959a-f61a-4b90-b3ef-6872e71b0787-kube-api-access-2k7t5\") pod \"7de6959a-f61a-4b90-b3ef-6872e71b0787\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383430 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-proxy-ca-bundles\") pod \"31b78c38-1f04-4f90-b170-b4dabaad65c8\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-config\") pod \"7de6959a-f61a-4b90-b3ef-6872e71b0787\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383522 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-client-ca\") pod \"7de6959a-f61a-4b90-b3ef-6872e71b0787\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383563 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7de6959a-f61a-4b90-b3ef-6872e71b0787-serving-cert\") pod \"7de6959a-f61a-4b90-b3ef-6872e71b0787\" (UID: \"7de6959a-f61a-4b90-b3ef-6872e71b0787\") " Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwtbp\" (UniqueName: \"kubernetes.io/projected/31b78c38-1f04-4f90-b170-b4dabaad65c8-kube-api-access-mwtbp\") pod \"31b78c38-1f04-4f90-b170-b4dabaad65c8\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383640 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-config\") pod \"31b78c38-1f04-4f90-b170-b4dabaad65c8\" (UID: \"31b78c38-1f04-4f90-b170-b4dabaad65c8\") " Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adee241c-02c9-4257-b45a-fbde0e9e9a06-serving-cert\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-utilities\") pod \"community-operators-2j9qm\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383870 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq89p\" (UniqueName: \"kubernetes.io/projected/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-kube-api-access-hq89p\") pod \"community-operators-2j9qm\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383900 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adee241c-02c9-4257-b45a-fbde0e9e9a06-config\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383935 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adee241c-02c9-4257-b45a-fbde0e9e9a06-client-ca\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383970 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnl6x\" (UniqueName: \"kubernetes.io/projected/adee241c-02c9-4257-b45a-fbde0e9e9a06-kube-api-access-bnl6x\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.383997 
4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-catalog-content\") pod \"community-operators-2j9qm\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.384538 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-catalog-content\") pod \"community-operators-2j9qm\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.385282 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-config" (OuterVolumeSpecName: "config") pod "31b78c38-1f04-4f90-b170-b4dabaad65c8" (UID: "31b78c38-1f04-4f90-b170-b4dabaad65c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.385292 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "31b78c38-1f04-4f90-b170-b4dabaad65c8" (UID: "31b78c38-1f04-4f90-b170-b4dabaad65c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.385730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-client-ca" (OuterVolumeSpecName: "client-ca") pod "7de6959a-f61a-4b90-b3ef-6872e71b0787" (UID: "7de6959a-f61a-4b90-b3ef-6872e71b0787"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.386321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-utilities\") pod \"community-operators-2j9qm\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.386419 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-config" (OuterVolumeSpecName: "config") pod "7de6959a-f61a-4b90-b3ef-6872e71b0787" (UID: "7de6959a-f61a-4b90-b3ef-6872e71b0787"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.386816 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "31b78c38-1f04-4f90-b170-b4dabaad65c8" (UID: "31b78c38-1f04-4f90-b170-b4dabaad65c8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.387945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b78c38-1f04-4f90-b170-b4dabaad65c8-kube-api-access-mwtbp" (OuterVolumeSpecName: "kube-api-access-mwtbp") pod "31b78c38-1f04-4f90-b170-b4dabaad65c8" (UID: "31b78c38-1f04-4f90-b170-b4dabaad65c8"). InnerVolumeSpecName "kube-api-access-mwtbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.387998 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31b78c38-1f04-4f90-b170-b4dabaad65c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "31b78c38-1f04-4f90-b170-b4dabaad65c8" (UID: "31b78c38-1f04-4f90-b170-b4dabaad65c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.390702 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de6959a-f61a-4b90-b3ef-6872e71b0787-kube-api-access-2k7t5" (OuterVolumeSpecName: "kube-api-access-2k7t5") pod "7de6959a-f61a-4b90-b3ef-6872e71b0787" (UID: "7de6959a-f61a-4b90-b3ef-6872e71b0787"). InnerVolumeSpecName "kube-api-access-2k7t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.393666 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7de6959a-f61a-4b90-b3ef-6872e71b0787-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7de6959a-f61a-4b90-b3ef-6872e71b0787" (UID: "7de6959a-f61a-4b90-b3ef-6872e71b0787"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.400586 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq89p\" (UniqueName: \"kubernetes.io/projected/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-kube-api-access-hq89p\") pod \"community-operators-2j9qm\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485124 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnl6x\" (UniqueName: \"kubernetes.io/projected/adee241c-02c9-4257-b45a-fbde0e9e9a06-kube-api-access-bnl6x\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485192 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adee241c-02c9-4257-b45a-fbde0e9e9a06-serving-cert\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adee241c-02c9-4257-b45a-fbde0e9e9a06-config\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485259 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adee241c-02c9-4257-b45a-fbde0e9e9a06-client-ca\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485478 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485574 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485588 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7de6959a-f61a-4b90-b3ef-6872e71b0787-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485630 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7de6959a-f61a-4b90-b3ef-6872e71b0787-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485647 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwtbp\" (UniqueName: \"kubernetes.io/projected/31b78c38-1f04-4f90-b170-b4dabaad65c8-kube-api-access-mwtbp\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:56 crc 
kubenswrapper[4858]: I0218 00:39:56.485662 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485701 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31b78c38-1f04-4f90-b170-b4dabaad65c8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485715 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31b78c38-1f04-4f90-b170-b4dabaad65c8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.485727 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k7t5\" (UniqueName: \"kubernetes.io/projected/7de6959a-f61a-4b90-b3ef-6872e71b0787-kube-api-access-2k7t5\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.486410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adee241c-02c9-4257-b45a-fbde0e9e9a06-client-ca\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.486620 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adee241c-02c9-4257-b45a-fbde0e9e9a06-config\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.488508 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adee241c-02c9-4257-b45a-fbde0e9e9a06-serving-cert\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.496870 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.500621 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnl6x\" (UniqueName: \"kubernetes.io/projected/adee241c-02c9-4257-b45a-fbde0e9e9a06-kube-api-access-bnl6x\") pod \"route-controller-manager-84bf578688-p84d8\" (UID: \"adee241c-02c9-4257-b45a-fbde0e9e9a06\") " pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.582866 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.659368 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb5674b59-rwlvs"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.663644 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.665200 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2j9qm"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.676581 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.677295 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.677799 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.678833 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.680295 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb5674b59-rwlvs"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.680973 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.688738 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.691251 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.695832 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.701271 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c5cfd5f7b-bwx6t"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.705636 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.709563 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf578688-q8k5s"] Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.719234 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmv6q"] Feb 18 00:39:56 crc kubenswrapper[4858]: W0218 00:39:56.723700 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e67a988_e2c1_433a_88de_286490057c27.slice/crio-0b3699f0e07cee165c2be23dc71c927c052c87d9f958aa1a0ab1ea5b50677ad3 WatchSource:0}: Error finding container 0b3699f0e07cee165c2be23dc71c927c052c87d9f958aa1a0ab1ea5b50677ad3: Status 404 returned error can't find the container with id 0b3699f0e07cee165c2be23dc71c927c052c87d9f958aa1a0ab1ea5b50677ad3 Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.788917 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-config\") pod \"controller-manager-6cb5674b59-rwlvs\" 
(UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.788971 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-client-ca\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.788992 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v245\" (UniqueName: \"kubernetes.io/projected/59866999-d938-42df-8a70-7349af22ca1e-kube-api-access-6v245\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.789021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-proxy-ca-bundles\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.789049 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59866999-d938-42df-8a70-7349af22ca1e-serving-cert\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.891144 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-config\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.891222 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-client-ca\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.891249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v245\" (UniqueName: \"kubernetes.io/projected/59866999-d938-42df-8a70-7349af22ca1e-kube-api-access-6v245\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.891288 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-proxy-ca-bundles\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " 
pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.891325 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59866999-d938-42df-8a70-7349af22ca1e-serving-cert\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.892403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-client-ca\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.893212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-config\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.893722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-proxy-ca-bundles\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.899191 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59866999-d938-42df-8a70-7349af22ca1e-serving-cert\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.910750 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v245\" (UniqueName: \"kubernetes.io/projected/59866999-d938-42df-8a70-7349af22ca1e-kube-api-access-6v245\") pod \"controller-manager-6cb5674b59-rwlvs\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:56 crc kubenswrapper[4858]: I0218 00:39:56.981602 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8"] Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.004876 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:57 crc kubenswrapper[4858]: W0218 00:39:57.023599 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadee241c_02c9_4257_b45a_fbde0e9e9a06.slice/crio-9af533c8eb2c0ce9d18d7591b61deb104c8cb56b754c0a2f220e1052779d5883 WatchSource:0}: Error finding container 9af533c8eb2c0ce9d18d7591b61deb104c8cb56b754c0a2f220e1052779d5883: Status 404 returned error can't find the container with id 9af533c8eb2c0ce9d18d7591b61deb104c8cb56b754c0a2f220e1052779d5883 Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.317746 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb5674b59-rwlvs"] Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.328069 4858 generic.go:334] "Generic (PLEG): container finished" podID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerID="38880aba737d55ce78d51a1c62217da8357048608d3bf59f71fdc7c442d2fbf3" exitCode=0 Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.328153 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9qm" event={"ID":"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5","Type":"ContainerDied","Data":"38880aba737d55ce78d51a1c62217da8357048608d3bf59f71fdc7c442d2fbf3"} Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.328184 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9qm" event={"ID":"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5","Type":"ContainerStarted","Data":"a124170683ffed7e1cd8aa040b590f01909c8b8255ade5b616178cf650beea57"} Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.338862 4858 generic.go:334] "Generic (PLEG): container finished" podID="9e67a988-e2c1-433a-88de-286490057c27" containerID="1461b1251a5a9858c44994dcca5e2152857189d911e602cc68166edfab7e3554" exitCode=0 Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.338916 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmv6q" event={"ID":"9e67a988-e2c1-433a-88de-286490057c27","Type":"ContainerDied","Data":"1461b1251a5a9858c44994dcca5e2152857189d911e602cc68166edfab7e3554"} Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.338941 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmv6q" event={"ID":"9e67a988-e2c1-433a-88de-286490057c27","Type":"ContainerStarted","Data":"0b3699f0e07cee165c2be23dc71c927c052c87d9f958aa1a0ab1ea5b50677ad3"} Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.344325 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" event={"ID":"adee241c-02c9-4257-b45a-fbde0e9e9a06","Type":"ContainerStarted","Data":"a5f3ca8600fb8285aa372f2c973c97514976a3d8a6a3bdd892e949c2be52dad2"} Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.344920 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" event={"ID":"adee241c-02c9-4257-b45a-fbde0e9e9a06","Type":"ContainerStarted","Data":"9af533c8eb2c0ce9d18d7591b61deb104c8cb56b754c0a2f220e1052779d5883"} Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.344958 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 
00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.369050 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" podStartSLOduration=1.369007515 podStartE2EDuration="1.369007515s" podCreationTimestamp="2026-02-18 00:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:39:57.368250287 +0000 UTC m=+350.674087059" watchObservedRunningTime="2026-02-18 00:39:57.369007515 +0000 UTC m=+350.674844247" Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.428004 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b78c38-1f04-4f90-b170-b4dabaad65c8" path="/var/lib/kubelet/pods/31b78c38-1f04-4f90-b170-b4dabaad65c8/volumes" Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.428488 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7de6959a-f61a-4b90-b3ef-6872e71b0787" path="/var/lib/kubelet/pods/7de6959a-f61a-4b90-b3ef-6872e71b0787/volumes" Feb 18 00:39:57 crc kubenswrapper[4858]: I0218 00:39:57.575658 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84bf578688-p84d8" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.311217 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8qskx"] Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.313302 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.314862 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.322776 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8qskx"] Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.389989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" event={"ID":"59866999-d938-42df-8a70-7349af22ca1e","Type":"ContainerStarted","Data":"02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051"} Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.390055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" event={"ID":"59866999-d938-42df-8a70-7349af22ca1e","Type":"ContainerStarted","Data":"5b912592d7ca68ea90fac65dc6f4e57655ab36782c0459b5ecd2f2ecfc0b9f06"} Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.390083 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.394837 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.396506 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9qm" event={"ID":"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5","Type":"ContainerStarted","Data":"66289234d4405f807096c71eee79876a5db8505ffe34f303b1c72d53229e2d13"} Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.399192 4858 generic.go:334] "Generic (PLEG): 
container finished" podID="9e67a988-e2c1-433a-88de-286490057c27" containerID="7c2b0cf43b2d98baf9b633cf01d69ea267d58b9a84105410a6c8bca996c00650" exitCode=0 Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.399685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmv6q" event={"ID":"9e67a988-e2c1-433a-88de-286490057c27","Type":"ContainerDied","Data":"7c2b0cf43b2d98baf9b633cf01d69ea267d58b9a84105410a6c8bca996c00650"} Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.412088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98459bf-9693-495a-ac0d-f46be8ea2df1-utilities\") pod \"redhat-operators-8qskx\" (UID: \"b98459bf-9693-495a-ac0d-f46be8ea2df1\") " pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.412308 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98459bf-9693-495a-ac0d-f46be8ea2df1-catalog-content\") pod \"redhat-operators-8qskx\" (UID: \"b98459bf-9693-495a-ac0d-f46be8ea2df1\") " pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.412344 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvm8r\" (UniqueName: \"kubernetes.io/projected/b98459bf-9693-495a-ac0d-f46be8ea2df1-kube-api-access-mvm8r\") pod \"redhat-operators-8qskx\" (UID: \"b98459bf-9693-495a-ac0d-f46be8ea2df1\") " pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.413579 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" podStartSLOduration=3.413560563 podStartE2EDuration="3.413560563s" podCreationTimestamp="2026-02-18 00:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:39:58.411759867 +0000 UTC m=+351.717596599" watchObservedRunningTime="2026-02-18 00:39:58.413560563 +0000 UTC m=+351.719397305" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.509846 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dpkwc"] Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.511303 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.513066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98459bf-9693-495a-ac0d-f46be8ea2df1-utilities\") pod \"redhat-operators-8qskx\" (UID: \"b98459bf-9693-495a-ac0d-f46be8ea2df1\") " pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.513125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-catalog-content\") pod \"certified-operators-dpkwc\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.513183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-utilities\") pod \"certified-operators-dpkwc\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.513211 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98459bf-9693-495a-ac0d-f46be8ea2df1-catalog-content\") pod \"redhat-operators-8qskx\" (UID: \"b98459bf-9693-495a-ac0d-f46be8ea2df1\") " pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.513231 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvm8r\" (UniqueName: \"kubernetes.io/projected/b98459bf-9693-495a-ac0d-f46be8ea2df1-kube-api-access-mvm8r\") pod \"redhat-operators-8qskx\" (UID: \"b98459bf-9693-495a-ac0d-f46be8ea2df1\") " pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.513255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh4b7\" (UniqueName: \"kubernetes.io/projected/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-kube-api-access-qh4b7\") pod \"certified-operators-dpkwc\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.513346 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.513861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98459bf-9693-495a-ac0d-f46be8ea2df1-catalog-content\") pod \"redhat-operators-8qskx\" (UID: \"b98459bf-9693-495a-ac0d-f46be8ea2df1\") " pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.514154 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98459bf-9693-495a-ac0d-f46be8ea2df1-utilities\") pod \"redhat-operators-8qskx\" (UID: \"b98459bf-9693-495a-ac0d-f46be8ea2df1\") " pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.526264 4858 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/certified-operators-dpkwc"] Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.540970 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvm8r\" (UniqueName: \"kubernetes.io/projected/b98459bf-9693-495a-ac0d-f46be8ea2df1-kube-api-access-mvm8r\") pod \"redhat-operators-8qskx\" (UID: \"b98459bf-9693-495a-ac0d-f46be8ea2df1\") " pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.614293 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh4b7\" (UniqueName: \"kubernetes.io/projected/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-kube-api-access-qh4b7\") pod \"certified-operators-dpkwc\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.614358 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-catalog-content\") pod \"certified-operators-dpkwc\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.614397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-utilities\") pod \"certified-operators-dpkwc\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.614837 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-utilities\") pod \"certified-operators-dpkwc\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.614913 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-catalog-content\") pod \"certified-operators-dpkwc\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.629243 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.632859 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh4b7\" (UniqueName: \"kubernetes.io/projected/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-kube-api-access-qh4b7\") pod \"certified-operators-dpkwc\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:58 crc kubenswrapper[4858]: I0218 00:39:58.837007 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.046053 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8qskx"] Feb 18 00:39:59 crc kubenswrapper[4858]: W0218 00:39:59.056656 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb98459bf_9693_495a_ac0d_f46be8ea2df1.slice/crio-d9ed5cd68b3beadf835214d54c71f15054939e75b644d07de634c24716deb935 WatchSource:0}: Error finding container d9ed5cd68b3beadf835214d54c71f15054939e75b644d07de634c24716deb935: Status 404 returned error can't find the container with id d9ed5cd68b3beadf835214d54c71f15054939e75b644d07de634c24716deb935 Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.242687 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpkwc"] Feb 18 00:39:59 crc kubenswrapper[4858]: W0218 00:39:59.250052 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c53f3ee_32ae_4fb7_9dae_4dfb8b9f97b1.slice/crio-898b1be81748defcc91b07f07117b2729cd4d10dec2470530e47c70a8598bbcf WatchSource:0}: Error finding container 898b1be81748defcc91b07f07117b2729cd4d10dec2470530e47c70a8598bbcf: Status 404 returned error can't find the container with id 898b1be81748defcc91b07f07117b2729cd4d10dec2470530e47c70a8598bbcf Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.407102 4858 generic.go:334] "Generic (PLEG): container finished" podID="b98459bf-9693-495a-ac0d-f46be8ea2df1" containerID="f9e944e335d3ad7c5eee4097d098089f41b7ad07b1fcfa32ff2f9fddb7e83277" exitCode=0 Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.407196 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qskx" event={"ID":"b98459bf-9693-495a-ac0d-f46be8ea2df1","Type":"ContainerDied","Data":"f9e944e335d3ad7c5eee4097d098089f41b7ad07b1fcfa32ff2f9fddb7e83277"} Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.407234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qskx" event={"ID":"b98459bf-9693-495a-ac0d-f46be8ea2df1","Type":"ContainerStarted","Data":"d9ed5cd68b3beadf835214d54c71f15054939e75b644d07de634c24716deb935"} Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.411056 4858 generic.go:334] "Generic (PLEG): container finished" podID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerID="802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d" exitCode=0 Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.411121 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpkwc" event={"ID":"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1","Type":"ContainerDied","Data":"802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d"} Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.411142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpkwc" event={"ID":"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1","Type":"ContainerStarted","Data":"898b1be81748defcc91b07f07117b2729cd4d10dec2470530e47c70a8598bbcf"} Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.415334 4858 generic.go:334] "Generic (PLEG): container finished" podID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerID="66289234d4405f807096c71eee79876a5db8505ffe34f303b1c72d53229e2d13" exitCode=0 
Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.415434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9qm" event={"ID":"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5","Type":"ContainerDied","Data":"66289234d4405f807096c71eee79876a5db8505ffe34f303b1c72d53229e2d13"} Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.415471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9qm" event={"ID":"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5","Type":"ContainerStarted","Data":"2c8ecedac3f251631d4fd2d57aec2a9c2b49f3b7f8c46aa83a53922400e0cf20"} Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.438863 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmv6q" event={"ID":"9e67a988-e2c1-433a-88de-286490057c27","Type":"ContainerStarted","Data":"4f5dd44e369597cbdcfe085ed2b3e3555b31128337160d13188c72e54661fa13"} Feb 18 00:39:59 crc kubenswrapper[4858]: I0218 00:39:59.472686 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2j9qm" podStartSLOduration=1.957283813 podStartE2EDuration="3.472666534s" podCreationTimestamp="2026-02-18 00:39:56 +0000 UTC" firstStartedPulling="2026-02-18 00:39:57.335632611 +0000 UTC m=+350.641469363" lastFinishedPulling="2026-02-18 00:39:58.851015352 +0000 UTC m=+352.156852084" observedRunningTime="2026-02-18 00:39:59.471664758 +0000 UTC m=+352.777501480" watchObservedRunningTime="2026-02-18 00:39:59.472666534 +0000 UTC m=+352.778503276" Feb 18 00:40:00 crc kubenswrapper[4858]: I0218 00:40:00.428092 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qskx" event={"ID":"b98459bf-9693-495a-ac0d-f46be8ea2df1","Type":"ContainerStarted","Data":"f044241a904e2341c5ff09c15fe71e1e3a338dd95e0e248352d3bdf21f965673"} Feb 18 00:40:00 crc kubenswrapper[4858]: I0218 00:40:00.430359 4858 generic.go:334] "Generic (PLEG): container finished" podID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerID="f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe" exitCode=0 Feb 18 00:40:00 crc kubenswrapper[4858]: I0218 00:40:00.430432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpkwc" event={"ID":"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1","Type":"ContainerDied","Data":"f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe"} Feb 18 00:40:00 crc kubenswrapper[4858]: I0218 00:40:00.454076 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zmv6q" podStartSLOduration=3.963183462 podStartE2EDuration="5.454040995s" podCreationTimestamp="2026-02-18 00:39:55 +0000 UTC" firstStartedPulling="2026-02-18 00:39:57.341189753 +0000 UTC m=+350.647026475" lastFinishedPulling="2026-02-18 00:39:58.832047266 +0000 UTC m=+352.137884008" observedRunningTime="2026-02-18 00:39:59.502648431 +0000 UTC m=+352.808485163" watchObservedRunningTime="2026-02-18 00:40:00.454040995 +0000 UTC m=+353.759877727" Feb 18 00:40:01 crc kubenswrapper[4858]: I0218 00:40:01.447662 4858 generic.go:334] "Generic (PLEG): container finished" podID="b98459bf-9693-495a-ac0d-f46be8ea2df1" containerID="f044241a904e2341c5ff09c15fe71e1e3a338dd95e0e248352d3bdf21f965673" exitCode=0 Feb 18 00:40:01 crc kubenswrapper[4858]: I0218 00:40:01.447783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qskx" 
event={"ID":"b98459bf-9693-495a-ac0d-f46be8ea2df1","Type":"ContainerDied","Data":"f044241a904e2341c5ff09c15fe71e1e3a338dd95e0e248352d3bdf21f965673"} Feb 18 00:40:01 crc kubenswrapper[4858]: I0218 00:40:01.451566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpkwc" event={"ID":"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1","Type":"ContainerStarted","Data":"fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816"} Feb 18 00:40:01 crc kubenswrapper[4858]: I0218 00:40:01.490839 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dpkwc" podStartSLOduration=2.068377802 podStartE2EDuration="3.490813832s" podCreationTimestamp="2026-02-18 00:39:58 +0000 UTC" firstStartedPulling="2026-02-18 00:39:59.412901094 +0000 UTC m=+352.718737826" lastFinishedPulling="2026-02-18 00:40:00.835337134 +0000 UTC m=+354.141173856" observedRunningTime="2026-02-18 00:40:01.485468005 +0000 UTC m=+354.791304777" watchObservedRunningTime="2026-02-18 00:40:01.490813832 +0000 UTC m=+354.796650584" Feb 18 00:40:02 crc kubenswrapper[4858]: I0218 00:40:02.459639 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qskx" event={"ID":"b98459bf-9693-495a-ac0d-f46be8ea2df1","Type":"ContainerStarted","Data":"d88cf5ed04e80e42b52f75839b85ccfc64db5643b20182bfed0fdf4beaa5c08a"} Feb 18 00:40:02 crc kubenswrapper[4858]: I0218 00:40:02.481456 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8qskx" podStartSLOduration=2.042299935 podStartE2EDuration="4.48143716s" podCreationTimestamp="2026-02-18 00:39:58 +0000 UTC" firstStartedPulling="2026-02-18 00:39:59.409486987 +0000 UTC m=+352.715323759" lastFinishedPulling="2026-02-18 00:40:01.848624252 +0000 UTC m=+355.154460984" observedRunningTime="2026-02-18 00:40:02.479208693 +0000 UTC m=+355.785045435" watchObservedRunningTime="2026-02-18 00:40:02.48143716 +0000 UTC m=+355.787273892" Feb 18 00:40:06 crc kubenswrapper[4858]: I0218 00:40:06.277682 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:40:06 crc kubenswrapper[4858]: I0218 00:40:06.278118 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:40:06 crc kubenswrapper[4858]: I0218 00:40:06.318952 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:40:06 crc kubenswrapper[4858]: I0218 00:40:06.498460 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:40:06 crc kubenswrapper[4858]: I0218 00:40:06.498769 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:40:06 crc kubenswrapper[4858]: I0218 00:40:06.518887 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zmv6q" Feb 18 00:40:06 crc kubenswrapper[4858]: I0218 00:40:06.557974 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:40:07 crc kubenswrapper[4858]: I0218 00:40:07.553654 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-2j9qm" Feb 18 00:40:08 crc kubenswrapper[4858]: I0218 00:40:08.629882 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:40:08 crc kubenswrapper[4858]: I0218 00:40:08.630179 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:40:08 crc kubenswrapper[4858]: I0218 00:40:08.838028 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:40:08 crc kubenswrapper[4858]: I0218 00:40:08.838459 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:40:08 crc kubenswrapper[4858]: I0218 00:40:08.897203 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:40:09 crc kubenswrapper[4858]: I0218 00:40:09.581222 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 00:40:09 crc kubenswrapper[4858]: I0218 00:40:09.687824 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8qskx" podUID="b98459bf-9693-495a-ac0d-f46be8ea2df1" containerName="registry-server" probeResult="failure" output=< Feb 18 00:40:09 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 00:40:09 crc kubenswrapper[4858]: > Feb 18 00:40:15 crc kubenswrapper[4858]: I0218 00:40:15.653753 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb5674b59-rwlvs"] Feb 18 00:40:15 crc kubenswrapper[4858]: I0218 00:40:15.654887 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" podUID="59866999-d938-42df-8a70-7349af22ca1e" containerName="controller-manager" containerID="cri-o://02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051" gracePeriod=30 Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.180130 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.349868 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-client-ca\") pod \"59866999-d938-42df-8a70-7349af22ca1e\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.349926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-config\") pod \"59866999-d938-42df-8a70-7349af22ca1e\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.349952 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59866999-d938-42df-8a70-7349af22ca1e-serving-cert\") pod \"59866999-d938-42df-8a70-7349af22ca1e\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.349971 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v245\" (UniqueName: \"kubernetes.io/projected/59866999-d938-42df-8a70-7349af22ca1e-kube-api-access-6v245\") pod \"59866999-d938-42df-8a70-7349af22ca1e\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.350009 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-proxy-ca-bundles\") pod \"59866999-d938-42df-8a70-7349af22ca1e\" (UID: \"59866999-d938-42df-8a70-7349af22ca1e\") " Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.350827 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-client-ca" (OuterVolumeSpecName: "client-ca") pod "59866999-d938-42df-8a70-7349af22ca1e" (UID: "59866999-d938-42df-8a70-7349af22ca1e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.350899 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "59866999-d938-42df-8a70-7349af22ca1e" (UID: "59866999-d938-42df-8a70-7349af22ca1e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.350927 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-config" (OuterVolumeSpecName: "config") pod "59866999-d938-42df-8a70-7349af22ca1e" (UID: "59866999-d938-42df-8a70-7349af22ca1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.355417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59866999-d938-42df-8a70-7349af22ca1e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59866999-d938-42df-8a70-7349af22ca1e" (UID: "59866999-d938-42df-8a70-7349af22ca1e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.355535 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59866999-d938-42df-8a70-7349af22ca1e-kube-api-access-6v245" (OuterVolumeSpecName: "kube-api-access-6v245") pod "59866999-d938-42df-8a70-7349af22ca1e" (UID: "59866999-d938-42df-8a70-7349af22ca1e"). InnerVolumeSpecName "kube-api-access-6v245". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.451194 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.451232 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.451244 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59866999-d938-42df-8a70-7349af22ca1e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.451280 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v245\" (UniqueName: \"kubernetes.io/projected/59866999-d938-42df-8a70-7349af22ca1e-kube-api-access-6v245\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.451295 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59866999-d938-42df-8a70-7349af22ca1e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.560342 4858 generic.go:334] "Generic (PLEG): container finished" podID="59866999-d938-42df-8a70-7349af22ca1e" containerID="02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051" exitCode=0 Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.560413 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" event={"ID":"59866999-d938-42df-8a70-7349af22ca1e","Type":"ContainerDied","Data":"02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051"} Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.560471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" event={"ID":"59866999-d938-42df-8a70-7349af22ca1e","Type":"ContainerDied","Data":"5b912592d7ca68ea90fac65dc6f4e57655ab36782c0459b5ecd2f2ecfc0b9f06"} Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.560539 4858 scope.go:117] "RemoveContainer" containerID="02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051" Feb 18 00:40:16 crc kubenswrapper[4858]: I0218 00:40:16.560714 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.462267 4858 scope.go:117] "RemoveContainer" containerID="02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051" Feb 18 00:40:17 crc kubenswrapper[4858]: E0218 00:40:17.463111 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051\": container with ID starting with 02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051 not found: ID does not exist" containerID="02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.463137 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051"} err="failed to get container status \"02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051\": rpc error: code = NotFound desc = could not find container \"02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051\": container with ID starting with 02831c11de18a025bbc11a40276ddd872aae422d42105c9e147dda270d41f051 not found: ID does not exist" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.673820 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k"] Feb 18 00:40:17 crc kubenswrapper[4858]: E0218 00:40:17.674216 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59866999-d938-42df-8a70-7349af22ca1e" containerName="controller-manager" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.674300 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="59866999-d938-42df-8a70-7349af22ca1e" containerName="controller-manager" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.674475 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="59866999-d938-42df-8a70-7349af22ca1e" containerName="controller-manager" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.674949 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.677145 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.677548 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.677672 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.677992 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.678943 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.685643 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.690007 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k"] Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.704030 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.869214 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cdd923ae-7208-40fd-9502-0f3c57dad8e6-proxy-ca-bundles\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.869338 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsnl\" (UniqueName: \"kubernetes.io/projected/cdd923ae-7208-40fd-9502-0f3c57dad8e6-kube-api-access-nmsnl\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.869389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdd923ae-7208-40fd-9502-0f3c57dad8e6-client-ca\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.869437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd923ae-7208-40fd-9502-0f3c57dad8e6-serving-cert\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.869468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cdd923ae-7208-40fd-9502-0f3c57dad8e6-config\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.970526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmsnl\" (UniqueName: \"kubernetes.io/projected/cdd923ae-7208-40fd-9502-0f3c57dad8e6-kube-api-access-nmsnl\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.970919 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdd923ae-7208-40fd-9502-0f3c57dad8e6-client-ca\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.970969 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd923ae-7208-40fd-9502-0f3c57dad8e6-serving-cert\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.971004 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd923ae-7208-40fd-9502-0f3c57dad8e6-config\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.971080 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cdd923ae-7208-40fd-9502-0f3c57dad8e6-proxy-ca-bundles\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.972591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdd923ae-7208-40fd-9502-0f3c57dad8e6-client-ca\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.973675 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cdd923ae-7208-40fd-9502-0f3c57dad8e6-proxy-ca-bundles\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.974941 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd923ae-7208-40fd-9502-0f3c57dad8e6-config\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" 
Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.980000 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd923ae-7208-40fd-9502-0f3c57dad8e6-serving-cert\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:17 crc kubenswrapper[4858]: I0218 00:40:17.990031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmsnl\" (UniqueName: \"kubernetes.io/projected/cdd923ae-7208-40fd-9502-0f3c57dad8e6-kube-api-access-nmsnl\") pod \"controller-manager-7c5cfd5f7b-njj6k\" (UID: \"cdd923ae-7208-40fd-9502-0f3c57dad8e6\") " pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:18 crc kubenswrapper[4858]: I0218 00:40:18.290269 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:18 crc kubenswrapper[4858]: I0218 00:40:18.719034 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:40:18 crc kubenswrapper[4858]: I0218 00:40:18.765276 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k"] Feb 18 00:40:18 crc kubenswrapper[4858]: I0218 00:40:18.790023 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8qskx" Feb 18 00:40:19 crc kubenswrapper[4858]: I0218 00:40:19.578104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" event={"ID":"cdd923ae-7208-40fd-9502-0f3c57dad8e6","Type":"ContainerStarted","Data":"06aaa6aaee4d7ac8d963bb3187e4ef161eeea4322560f9793e47c1d650832a31"} Feb 18 00:40:19 crc kubenswrapper[4858]: I0218 00:40:19.578594 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" event={"ID":"cdd923ae-7208-40fd-9502-0f3c57dad8e6","Type":"ContainerStarted","Data":"eb4634cb5dca638e359afc104a4fb0cccd1ef8d01732a33d2dd878c51fc73af6"} Feb 18 00:40:19 crc kubenswrapper[4858]: I0218 00:40:19.596628 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" podStartSLOduration=4.596605682 podStartE2EDuration="4.596605682s" podCreationTimestamp="2026-02-18 00:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:40:19.594262393 +0000 UTC m=+372.900099125" watchObservedRunningTime="2026-02-18 00:40:19.596605682 +0000 UTC m=+372.902442434" Feb 18 00:40:19 crc kubenswrapper[4858]: I0218 00:40:19.961295 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hfklb"] Feb 18 00:40:19 crc kubenswrapper[4858]: I0218 00:40:19.961990 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:19 crc kubenswrapper[4858]: I0218 00:40:19.973647 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hfklb"] Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.098677 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.098828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-trusted-ca\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.099055 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpd5w\" (UniqueName: \"kubernetes.io/projected/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-kube-api-access-hpd5w\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.099095 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.099129 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-bound-sa-token\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.099165 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.099239 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-registry-tls\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.099311 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-registry-certificates\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.121031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.200094 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-registry-certificates\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.200161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.200195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-trusted-ca\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.200247 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpd5w\" (UniqueName: \"kubernetes.io/projected/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-kube-api-access-hpd5w\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.200265 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.200283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-registry-tls\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.200297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-bound-sa-token\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.201244 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-registry-certificates\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.201251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.202129 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-trusted-ca\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.207100 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.214897 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-registry-tls\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.217442 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpd5w\" (UniqueName: \"kubernetes.io/projected/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-kube-api-access-hpd5w\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.227658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1dd72b9-f5fe-43b9-98e4-fb8e9592532e-bound-sa-token\") pod \"image-registry-66df7c8f76-hfklb\" (UID: \"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e\") " pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.280928 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.583806 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.592427 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c5cfd5f7b-njj6k" Feb 18 00:40:20 crc kubenswrapper[4858]: I0218 00:40:20.704658 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hfklb"] Feb 18 00:40:20 crc kubenswrapper[4858]: W0218 00:40:20.715745 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1dd72b9_f5fe_43b9_98e4_fb8e9592532e.slice/crio-eecb20e0866095d1ad27bd5ff0e847fe86a5e9b8d84fe591678888910ef7fc52 WatchSource:0}: Error finding container eecb20e0866095d1ad27bd5ff0e847fe86a5e9b8d84fe591678888910ef7fc52: Status 404 returned error can't find the container with id eecb20e0866095d1ad27bd5ff0e847fe86a5e9b8d84fe591678888910ef7fc52 Feb 18 00:40:21 crc kubenswrapper[4858]: I0218 00:40:21.590561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" event={"ID":"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e","Type":"ContainerStarted","Data":"8b39e259d778611b0f8d5663e4fd880f7d23144cdf3e99c47e2501b16aea2cb2"} Feb 18 00:40:21 crc kubenswrapper[4858]: I0218 00:40:21.590612 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" event={"ID":"e1dd72b9-f5fe-43b9-98e4-fb8e9592532e","Type":"ContainerStarted","Data":"eecb20e0866095d1ad27bd5ff0e847fe86a5e9b8d84fe591678888910ef7fc52"} Feb 18 00:40:21 crc kubenswrapper[4858]: I0218 00:40:21.590746 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:21 crc kubenswrapper[4858]: I0218 00:40:21.610951 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" podStartSLOduration=2.610892903 podStartE2EDuration="2.610892903s" podCreationTimestamp="2026-02-18 00:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:40:21.609664741 +0000 UTC m=+374.915501493" watchObservedRunningTime="2026-02-18 00:40:21.610892903 +0000 UTC m=+374.916729655" Feb 18 00:40:25 crc kubenswrapper[4858]: I0218 00:40:25.264948 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:40:25 crc kubenswrapper[4858]: I0218 00:40:25.265664 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:40:40 crc kubenswrapper[4858]: I0218 00:40:40.285802 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-66df7c8f76-hfklb" Feb 18 00:40:40 crc kubenswrapper[4858]: I0218 00:40:40.347638 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qf4s8"] Feb 18 00:40:47 crc kubenswrapper[4858]: I0218 00:40:47.452791 4858 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod59866999-d938-42df-8a70-7349af22ca1e"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod59866999-d938-42df-8a70-7349af22ca1e] : Timed out while waiting for systemd to remove kubepods-burstable-pod59866999_d938_42df_8a70_7349af22ca1e.slice" Feb 18 00:40:47 crc kubenswrapper[4858]: E0218 00:40:47.453308 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod59866999-d938-42df-8a70-7349af22ca1e] : unable to destroy cgroup paths for cgroup [kubepods burstable pod59866999-d938-42df-8a70-7349af22ca1e] : Timed out while waiting for systemd to remove kubepods-burstable-pod59866999_d938_42df_8a70_7349af22ca1e.slice" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" podUID="59866999-d938-42df-8a70-7349af22ca1e" Feb 18 00:40:47 crc kubenswrapper[4858]: I0218 00:40:47.751457 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb5674b59-rwlvs" Feb 18 00:40:47 crc kubenswrapper[4858]: I0218 00:40:47.782382 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb5674b59-rwlvs"] Feb 18 00:40:47 crc kubenswrapper[4858]: I0218 00:40:47.788808 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cb5674b59-rwlvs"] Feb 18 00:40:49 crc kubenswrapper[4858]: I0218 00:40:49.427878 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59866999-d938-42df-8a70-7349af22ca1e" path="/var/lib/kubelet/pods/59866999-d938-42df-8a70-7349af22ca1e/volumes" Feb 18 00:40:55 crc kubenswrapper[4858]: I0218 00:40:55.265475 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:40:55 crc kubenswrapper[4858]: I0218 00:40:55.265882 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:40:55 crc kubenswrapper[4858]: I0218 00:40:55.265943 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:40:55 crc kubenswrapper[4858]: I0218 00:40:55.266718 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd89a9972872421e0ada8f4abaeaad4802dc5c9d7697434f5a0c48b333e5af6b"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:40:55 crc kubenswrapper[4858]: I0218 00:40:55.266802 4858 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://dd89a9972872421e0ada8f4abaeaad4802dc5c9d7697434f5a0c48b333e5af6b" gracePeriod=600 Feb 18 00:40:55 crc kubenswrapper[4858]: I0218 00:40:55.810235 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="dd89a9972872421e0ada8f4abaeaad4802dc5c9d7697434f5a0c48b333e5af6b" exitCode=0 Feb 18 00:40:55 crc kubenswrapper[4858]: I0218 00:40:55.810337 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"dd89a9972872421e0ada8f4abaeaad4802dc5c9d7697434f5a0c48b333e5af6b"} Feb 18 00:40:55 crc kubenswrapper[4858]: I0218 00:40:55.810681 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"8c79037d14a94a75ba333833fe3bdef198d4e97042dfa17705bf757bc2a57baf"} Feb 18 00:40:55 crc kubenswrapper[4858]: I0218 00:40:55.810711 4858 scope.go:117] "RemoveContainer" containerID="50715c79dfab91376dfb60e064758477c21ff85622e8f6824a867a33691f4645" Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.389554 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" podUID="329e20a2-8966-48c0-8300-bc996770880d" containerName="registry" containerID="cri-o://36a3df7e07c741e73366ffd3fa0cd0f165970a5c334c841d30c0867f8fb91ff8" gracePeriod=30 Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.879643 4858 generic.go:334] "Generic (PLEG): container finished" podID="329e20a2-8966-48c0-8300-bc996770880d" containerID="36a3df7e07c741e73366ffd3fa0cd0f165970a5c334c841d30c0867f8fb91ff8" exitCode=0 Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.879707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" event={"ID":"329e20a2-8966-48c0-8300-bc996770880d","Type":"ContainerDied","Data":"36a3df7e07c741e73366ffd3fa0cd0f165970a5c334c841d30c0867f8fb91ff8"} Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.933187 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.975084 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-kube-api-access-vpvb2\") pod \"329e20a2-8966-48c0-8300-bc996770880d\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.975191 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329e20a2-8966-48c0-8300-bc996770880d-ca-trust-extracted\") pod \"329e20a2-8966-48c0-8300-bc996770880d\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.975277 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-trusted-ca\") pod \"329e20a2-8966-48c0-8300-bc996770880d\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.975327 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-bound-sa-token\") pod \"329e20a2-8966-48c0-8300-bc996770880d\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.975375 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329e20a2-8966-48c0-8300-bc996770880d-installation-pull-secrets\") pod \"329e20a2-8966-48c0-8300-bc996770880d\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.975427 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-registry-tls\") pod \"329e20a2-8966-48c0-8300-bc996770880d\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.975724 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"329e20a2-8966-48c0-8300-bc996770880d\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.975830 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-registry-certificates\") pod \"329e20a2-8966-48c0-8300-bc996770880d\" (UID: \"329e20a2-8966-48c0-8300-bc996770880d\") " Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.977553 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "329e20a2-8966-48c0-8300-bc996770880d" (UID: "329e20a2-8966-48c0-8300-bc996770880d"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.978638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "329e20a2-8966-48c0-8300-bc996770880d" (UID: "329e20a2-8966-48c0-8300-bc996770880d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.984383 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-kube-api-access-vpvb2" (OuterVolumeSpecName: "kube-api-access-vpvb2") pod "329e20a2-8966-48c0-8300-bc996770880d" (UID: "329e20a2-8966-48c0-8300-bc996770880d"). InnerVolumeSpecName "kube-api-access-vpvb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.986306 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/329e20a2-8966-48c0-8300-bc996770880d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "329e20a2-8966-48c0-8300-bc996770880d" (UID: "329e20a2-8966-48c0-8300-bc996770880d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.987005 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "329e20a2-8966-48c0-8300-bc996770880d" (UID: "329e20a2-8966-48c0-8300-bc996770880d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.987899 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "329e20a2-8966-48c0-8300-bc996770880d" (UID: "329e20a2-8966-48c0-8300-bc996770880d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:41:05 crc kubenswrapper[4858]: I0218 00:41:05.997392 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "329e20a2-8966-48c0-8300-bc996770880d" (UID: "329e20a2-8966-48c0-8300-bc996770880d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.012700 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329e20a2-8966-48c0-8300-bc996770880d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "329e20a2-8966-48c0-8300-bc996770880d" (UID: "329e20a2-8966-48c0-8300-bc996770880d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.078110 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329e20a2-8966-48c0-8300-bc996770880d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.078161 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.078180 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.078199 4858 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329e20a2-8966-48c0-8300-bc996770880d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.078222 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.078240 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329e20a2-8966-48c0-8300-bc996770880d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.078259 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/329e20a2-8966-48c0-8300-bc996770880d-kube-api-access-vpvb2\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.890909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" event={"ID":"329e20a2-8966-48c0-8300-bc996770880d","Type":"ContainerDied","Data":"c305c72451f30f5f83399690f166a3b55e7a900213068f4b37698d51721eb4bd"} Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.890996 4858 scope.go:117] "RemoveContainer" containerID="36a3df7e07c741e73366ffd3fa0cd0f165970a5c334c841d30c0867f8fb91ff8" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.890994 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qf4s8" Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.941278 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qf4s8"] Feb 18 00:41:06 crc kubenswrapper[4858]: I0218 00:41:06.947213 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qf4s8"] Feb 18 00:41:07 crc kubenswrapper[4858]: I0218 00:41:07.443223 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="329e20a2-8966-48c0-8300-bc996770880d" path="/var/lib/kubelet/pods/329e20a2-8966-48c0-8300-bc996770880d/volumes" Feb 18 00:42:04 crc kubenswrapper[4858]: I0218 00:42:04.821190 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-nmn6s" podUID="7cc6c0de-0fa4-4366-b66d-7e8753c27f9f" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:42:55 crc kubenswrapper[4858]: I0218 00:42:55.265232 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:42:55 crc kubenswrapper[4858]: I0218 00:42:55.266087 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:43:07 crc kubenswrapper[4858]: I0218 00:43:07.753441 4858 scope.go:117] "RemoveContainer" containerID="43cc232211a808de4ccadb4be357271e9ce5b36aa4013d7c83421c138e02db43" Feb 18 00:43:25 crc kubenswrapper[4858]: I0218 00:43:25.265365 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:43:25 crc kubenswrapper[4858]: I0218 00:43:25.266196 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.765740 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl"] Feb 18 00:43:44 crc kubenswrapper[4858]: E0218 00:43:44.766620 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329e20a2-8966-48c0-8300-bc996770880d" containerName="registry" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.766635 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="329e20a2-8966-48c0-8300-bc996770880d" containerName="registry" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.766761 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="329e20a2-8966-48c0-8300-bc996770880d" containerName="registry" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.767911 4858 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.771883 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.775139 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl"] Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.844935 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8zn\" (UniqueName: \"kubernetes.io/projected/891969ef-ef73-4652-97d2-bc6a015fcdbd-kube-api-access-lf8zn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.845040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.845094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.945679 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf8zn\" (UniqueName: \"kubernetes.io/projected/891969ef-ef73-4652-97d2-bc6a015fcdbd-kube-api-access-lf8zn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.945760 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.945786 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.946174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.946657 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:44 crc kubenswrapper[4858]: I0218 00:43:44.964134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8zn\" (UniqueName: \"kubernetes.io/projected/891969ef-ef73-4652-97d2-bc6a015fcdbd-kube-api-access-lf8zn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:45 crc kubenswrapper[4858]: I0218 00:43:45.130113 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:45 crc kubenswrapper[4858]: I0218 00:43:45.361749 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl"] Feb 18 00:43:45 crc kubenswrapper[4858]: I0218 00:43:45.983765 4858 generic.go:334] "Generic (PLEG): container finished" podID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerID="df277ef192b805ed0121e444c229ab691a8d65212a2cb76878a5a59be351628f" exitCode=0 Feb 18 00:43:45 crc kubenswrapper[4858]: I0218 00:43:45.983930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" event={"ID":"891969ef-ef73-4652-97d2-bc6a015fcdbd","Type":"ContainerDied","Data":"df277ef192b805ed0121e444c229ab691a8d65212a2cb76878a5a59be351628f"} Feb 18 00:43:45 crc kubenswrapper[4858]: I0218 00:43:45.984075 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" event={"ID":"891969ef-ef73-4652-97d2-bc6a015fcdbd","Type":"ContainerStarted","Data":"6315eb2fc7e838f6df0807a13008da67d1e6e8fc3859bb728756b1a136575ac3"} Feb 18 00:43:45 crc kubenswrapper[4858]: I0218 00:43:45.986474 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:43:47 crc kubenswrapper[4858]: I0218 00:43:47.995120 4858 generic.go:334] "Generic (PLEG): container finished" podID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerID="3141019767adf8456db7feefde627ecde301e2c9cd8fbc329e98660296e82c79" exitCode=0 Feb 18 00:43:47 crc kubenswrapper[4858]: I0218 00:43:47.995174 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" event={"ID":"891969ef-ef73-4652-97d2-bc6a015fcdbd","Type":"ContainerDied","Data":"3141019767adf8456db7feefde627ecde301e2c9cd8fbc329e98660296e82c79"} Feb 18 00:43:49 crc kubenswrapper[4858]: I0218 00:43:49.006354 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerID="742e70e14fdbb8d6aee75400aff290961e1324c33d2de59755f7e1605b4ff01f" exitCode=0 Feb 18 00:43:49 crc kubenswrapper[4858]: I0218 00:43:49.006417 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" event={"ID":"891969ef-ef73-4652-97d2-bc6a015fcdbd","Type":"ContainerDied","Data":"742e70e14fdbb8d6aee75400aff290961e1324c33d2de59755f7e1605b4ff01f"} Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.322752 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.346125 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf8zn\" (UniqueName: \"kubernetes.io/projected/891969ef-ef73-4652-97d2-bc6a015fcdbd-kube-api-access-lf8zn\") pod \"891969ef-ef73-4652-97d2-bc6a015fcdbd\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.346166 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-bundle\") pod \"891969ef-ef73-4652-97d2-bc6a015fcdbd\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.346185 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-util\") pod \"891969ef-ef73-4652-97d2-bc6a015fcdbd\" (UID: \"891969ef-ef73-4652-97d2-bc6a015fcdbd\") " Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.348691 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-bundle" (OuterVolumeSpecName: "bundle") pod "891969ef-ef73-4652-97d2-bc6a015fcdbd" (UID: "891969ef-ef73-4652-97d2-bc6a015fcdbd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.352721 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891969ef-ef73-4652-97d2-bc6a015fcdbd-kube-api-access-lf8zn" (OuterVolumeSpecName: "kube-api-access-lf8zn") pod "891969ef-ef73-4652-97d2-bc6a015fcdbd" (UID: "891969ef-ef73-4652-97d2-bc6a015fcdbd"). InnerVolumeSpecName "kube-api-access-lf8zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.376659 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-util" (OuterVolumeSpecName: "util") pod "891969ef-ef73-4652-97d2-bc6a015fcdbd" (UID: "891969ef-ef73-4652-97d2-bc6a015fcdbd"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.448148 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf8zn\" (UniqueName: \"kubernetes.io/projected/891969ef-ef73-4652-97d2-bc6a015fcdbd-kube-api-access-lf8zn\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.448208 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:50 crc kubenswrapper[4858]: I0218 00:43:50.448235 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/891969ef-ef73-4652-97d2-bc6a015fcdbd-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:51 crc kubenswrapper[4858]: I0218 00:43:51.024945 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" event={"ID":"891969ef-ef73-4652-97d2-bc6a015fcdbd","Type":"ContainerDied","Data":"6315eb2fc7e838f6df0807a13008da67d1e6e8fc3859bb728756b1a136575ac3"} Feb 18 00:43:51 crc kubenswrapper[4858]: I0218 00:43:51.025322 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6315eb2fc7e838f6df0807a13008da67d1e6e8fc3859bb728756b1a136575ac3" Feb 18 00:43:51 crc kubenswrapper[4858]: I0218 00:43:51.025024 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl" Feb 18 00:43:55 crc kubenswrapper[4858]: I0218 00:43:55.266008 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:43:55 crc kubenswrapper[4858]: I0218 00:43:55.266414 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:43:55 crc kubenswrapper[4858]: I0218 00:43:55.266487 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:43:55 crc kubenswrapper[4858]: I0218 00:43:55.267283 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c79037d14a94a75ba333833fe3bdef198d4e97042dfa17705bf757bc2a57baf"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:43:55 crc kubenswrapper[4858]: I0218 00:43:55.267382 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://8c79037d14a94a75ba333833fe3bdef198d4e97042dfa17705bf757bc2a57baf" gracePeriod=600 Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.063525 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="8c79037d14a94a75ba333833fe3bdef198d4e97042dfa17705bf757bc2a57baf" exitCode=0 Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.063594 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"8c79037d14a94a75ba333833fe3bdef198d4e97042dfa17705bf757bc2a57baf"} Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.063896 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"010ce5804c87e4080adda85e7efd663cb5c56ffcff1160985ac0b6c58a57396c"} Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.063915 4858 scope.go:117] "RemoveContainer" containerID="dd89a9972872421e0ada8f4abaeaad4802dc5c9d7697434f5a0c48b333e5af6b" Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.344393 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jjq7k"] Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.345113 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovn-controller" containerID="cri-o://3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c" gracePeriod=30 Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.345608 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="sbdb" containerID="cri-o://4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c" gracePeriod=30 Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.345766 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="nbdb" containerID="cri-o://fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9" gracePeriod=30 Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.345944 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="northd" containerID="cri-o://06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e" gracePeriod=30 Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.346026 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674" gracePeriod=30 Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.346134 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovn-acl-logging" containerID="cri-o://c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990" gracePeriod=30 Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.345991 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" 
podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kube-rbac-proxy-node" containerID="cri-o://36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4" gracePeriod=30 Feb 18 00:43:56 crc kubenswrapper[4858]: I0218 00:43:56.405161 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" containerID="cri-o://19e70fa0770c17c46684d5759f3196c3d8f2f2c334f3870ed602967094fb84e1" gracePeriod=30 Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.069255 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/2.log" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.070061 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/1.log" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.070158 4858 generic.go:334] "Generic (PLEG): container finished" podID="631d8e25-82dd-4462-b98d-f076e7264b67" containerID="6df787764e784c8ac5e384a5692545df23cffb81f4ccfb4027bcea91c5242b7e" exitCode=2 Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.070290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sr8bs" event={"ID":"631d8e25-82dd-4462-b98d-f076e7264b67","Type":"ContainerDied","Data":"6df787764e784c8ac5e384a5692545df23cffb81f4ccfb4027bcea91c5242b7e"} Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.070378 4858 scope.go:117] "RemoveContainer" containerID="1c99f89af61fd7a5275a9556f567446675b39146de6e8b14b5ec7c475c26a413" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.070861 4858 scope.go:117] "RemoveContainer" containerID="6df787764e784c8ac5e384a5692545df23cffb81f4ccfb4027bcea91c5242b7e" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.071155 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sr8bs_openshift-multus(631d8e25-82dd-4462-b98d-f076e7264b67)\"" pod="openshift-multus/multus-sr8bs" podUID="631d8e25-82dd-4462-b98d-f076e7264b67" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.073856 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovnkube-controller/3.log" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.076281 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovn-acl-logging/0.log" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.076733 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovn-controller/0.log" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077073 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="19e70fa0770c17c46684d5759f3196c3d8f2f2c334f3870ed602967094fb84e1" exitCode=0 Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077143 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c" exitCode=0 Feb 18 00:43:57 crc kubenswrapper[4858]: 
I0218 00:43:57.077195 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9" exitCode=0 Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077255 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e" exitCode=0 Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077305 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674" exitCode=0 Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077355 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4" exitCode=0 Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077400 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990" exitCode=143 Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077448 4858 generic.go:334] "Generic (PLEG): container finished" podID="62c71780-47e7-4e14-9b93-60050f6f3141" containerID="3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c" exitCode=143 Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077564 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"19e70fa0770c17c46684d5759f3196c3d8f2f2c334f3870ed602967094fb84e1"} Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077643 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c"} Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077703 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9"} Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e"} Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674"} Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4"} Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077922 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990"} Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.077976 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c"} Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.097000 4858 scope.go:117] "RemoveContainer" containerID="654a3b3d31b37d872da214d23ebf83bf4ec4272b8ef12cd793af82bea158ce78" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.109459 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovn-acl-logging/0.log" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.110120 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovn-controller/0.log" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.110746 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185312 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8g2wf"] Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185548 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185563 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185576 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovn-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185583 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovn-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185594 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kube-rbac-proxy-node" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185601 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kube-rbac-proxy-node" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185609 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185615 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185627 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185632 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 
00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185640 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovn-acl-logging" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185647 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovn-acl-logging" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185655 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerName="extract" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185660 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerName="extract" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185667 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerName="pull" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185700 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerName="pull" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185707 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="northd" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185715 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="northd" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185724 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185732 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185741 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kubecfg-setup" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185748 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kubecfg-setup" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185756 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="sbdb" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185765 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="sbdb" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185776 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerName="util" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185784 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerName="util" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.185797 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="nbdb" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185804 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="nbdb" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185906 4858 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185916 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovn-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185929 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="northd" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185936 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185944 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="891969ef-ef73-4652-97d2-bc6a015fcdbd" containerName="extract" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185950 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185958 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovn-acl-logging" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185964 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="sbdb" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185972 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="kube-rbac-proxy-node" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185981 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185986 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="nbdb" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.185995 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.186076 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.186085 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: E0218 00:43:57.186095 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.186100 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.186173 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" containerName="ovnkube-controller" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.187673 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.254991 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-ovn-kubernetes\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255354 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255519 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-openvswitch\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255551 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-var-lib-openvswitch\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255578 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-env-overrides\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255593 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-netd\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-config\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255623 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-ovn\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255642 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/62c71780-47e7-4e14-9b93-60050f6f3141-ovn-node-metrics-cert\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255663 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-bin\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255679 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-slash\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255695 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-etc-openvswitch\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255712 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-node-log\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255727 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-systemd\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255747 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dd5n\" (UniqueName: \"kubernetes.io/projected/62c71780-47e7-4e14-9b93-60050f6f3141-kube-api-access-5dd5n\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255762 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-var-lib-cni-networks-ovn-kubernetes\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255780 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-systemd-units\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255798 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-netns\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255816 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-log-socket\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255837 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-script-lib\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255861 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-kubelet\") pod \"62c71780-47e7-4e14-9b93-60050f6f3141\" (UID: \"62c71780-47e7-4e14-9b93-60050f6f3141\") " Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255930 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-node-log\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255949 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-run-systemd\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255965 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-ovn-node-metrics-cert\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255980 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-slash\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.255996 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-cni-bin\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256013 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-log-socket\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-var-lib-openvswitch\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256050 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-env-overrides\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256091 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fffs\" (UniqueName: \"kubernetes.io/projected/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-kube-api-access-6fffs\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256109 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-run-ovn\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256126 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-etc-openvswitch\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256144 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-kubelet\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-systemd-units\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256177 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-ovnkube-script-lib\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256193 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-cni-netd\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256208 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-run-ovn-kubernetes\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256222 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-run-netns\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256263 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-run-openvswitch\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256286 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-ovnkube-config\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256313 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256325 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256333 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256332 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256360 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256348 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256380 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256455 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-log-socket" (OuterVolumeSpecName: "log-socket") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256478 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256519 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-slash" (OuterVolumeSpecName: "host-slash") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256542 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.256565 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-node-log" (OuterVolumeSpecName: "node-log") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.257581 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.257706 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.257898 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.258373 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.258531 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.261616 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c71780-47e7-4e14-9b93-60050f6f3141-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.261923 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c71780-47e7-4e14-9b93-60050f6f3141-kube-api-access-5dd5n" (OuterVolumeSpecName: "kube-api-access-5dd5n") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "kube-api-access-5dd5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.273145 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "62c71780-47e7-4e14-9b93-60050f6f3141" (UID: "62c71780-47e7-4e14-9b93-60050f6f3141"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357037 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-var-lib-openvswitch\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357088 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-env-overrides\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fffs\" (UniqueName: \"kubernetes.io/projected/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-kube-api-access-6fffs\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-run-ovn\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-etc-openvswitch\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357158 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-kubelet\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357176 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-systemd-units\") pod \"ovnkube-node-8g2wf\" (UID: 
\"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357190 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-ovnkube-script-lib\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357208 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-cni-netd\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357224 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-run-ovn-kubernetes\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357241 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-run-netns\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357260 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357290 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-run-openvswitch\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-ovnkube-config\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357340 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-node-log\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-run-systemd\") pod \"ovnkube-node-8g2wf\" (UID: 
\"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-ovn-node-metrics-cert\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357401 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-slash\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357425 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-cni-bin\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357443 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-log-socket\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357478 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357488 4858 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357513 4858 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357522 4858 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357529 4858 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357538 4858 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357546 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dd5n\" (UniqueName: \"kubernetes.io/projected/62c71780-47e7-4e14-9b93-60050f6f3141-kube-api-access-5dd5n\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357555 4858 
reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357565 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357574 4858 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357582 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357592 4858 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357600 4858 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357608 4858 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357619 4858 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/62c71780-47e7-4e14-9b93-60050f6f3141-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357627 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/62c71780-47e7-4e14-9b93-60050f6f3141-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357635 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/62c71780-47e7-4e14-9b93-60050f6f3141-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-run-ovn-kubernetes\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357676 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-run-netns\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357661 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-log-socket\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357708 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-var-lib-openvswitch\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.357722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-run-openvswitch\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358298 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-env-overrides\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358304 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-ovnkube-config\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358337 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-node-log\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358358 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-run-systemd\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358695 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-kubelet\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358725 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-run-ovn\") pod 
\"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-etc-openvswitch\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358903 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-cni-netd\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358931 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-slash\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358958 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-systemd-units\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.358996 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-host-cni-bin\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.359156 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-ovnkube-script-lib\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.361221 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-ovn-node-metrics-cert\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.377237 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fffs\" (UniqueName: \"kubernetes.io/projected/6ef1f934-1535-4a2b-a121-eb1c48ccbe4e-kube-api-access-6fffs\") pod \"ovnkube-node-8g2wf\" (UID: \"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e\") " pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: I0218 00:43:57.499454 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:43:57 crc kubenswrapper[4858]: W0218 00:43:57.521792 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ef1f934_1535_4a2b_a121_eb1c48ccbe4e.slice/crio-bd13ede15c1c2a8a915f3912b7217558d12f0edb4a3e09597a80119b29b0df86 WatchSource:0}: Error finding container bd13ede15c1c2a8a915f3912b7217558d12f0edb4a3e09597a80119b29b0df86: Status 404 returned error can't find the container with id bd13ede15c1c2a8a915f3912b7217558d12f0edb4a3e09597a80119b29b0df86 Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.094166 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/2.log" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.100112 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovn-acl-logging/0.log" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.100905 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jjq7k_62c71780-47e7-4e14-9b93-60050f6f3141/ovn-controller/0.log" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.101439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" event={"ID":"62c71780-47e7-4e14-9b93-60050f6f3141","Type":"ContainerDied","Data":"b089f5d406742cc184f82326fee6a53a24ed29bae92c39f55b92d9e792a0fc8c"} Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.101523 4858 scope.go:117] "RemoveContainer" containerID="19e70fa0770c17c46684d5759f3196c3d8f2f2c334f3870ed602967094fb84e1" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.101630 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jjq7k" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.104517 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ef1f934-1535-4a2b-a121-eb1c48ccbe4e" containerID="34e02a4c4d6b6e5f18613f851f1b71f22996bc2b83840101c05481713958337a" exitCode=0 Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.104555 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerDied","Data":"34e02a4c4d6b6e5f18613f851f1b71f22996bc2b83840101c05481713958337a"} Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.104582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerStarted","Data":"bd13ede15c1c2a8a915f3912b7217558d12f0edb4a3e09597a80119b29b0df86"} Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.136135 4858 scope.go:117] "RemoveContainer" containerID="4ea720092f273fe030ae2dabbd571779636d4ccbe08ae2c531b1b2f562b3076c" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.162869 4858 scope.go:117] "RemoveContainer" containerID="fede95e49f8c6a4f7a54751d5ab70ed457daf1bdad115ca70960d270cc4abba9" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.205026 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jjq7k"] Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.206292 4858 scope.go:117] "RemoveContainer" containerID="06144a545e387dfaa0342138c286d9f65ee0efc087f35e694375ed933913815e" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.208183 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jjq7k"] Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.228428 4858 scope.go:117] "RemoveContainer" containerID="e2ba396863a12a1771702e79b9f299c198cf1d6013eabbef64513efdc1b22674" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.254489 4858 scope.go:117] "RemoveContainer" containerID="36fb3a7c6d0b60cbb0947bbe22513afe57b1eaec484eb386795427b35695c3c4" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.280700 4858 scope.go:117] "RemoveContainer" containerID="c8ba8b072ce304b1ab1bfae0d6594316bd12669b1f94071a8ac4d782ddd0d990" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.302998 4858 scope.go:117] "RemoveContainer" containerID="3a29b338b4007373f3196c8138b4b9eea804cd8bad00045ee5cb21674348e77c" Feb 18 00:43:58 crc kubenswrapper[4858]: I0218 00:43:58.330999 4858 scope.go:117] "RemoveContainer" containerID="bc6cd264fd9173d1ffa3e4a4b660098e4404cac674153fc82e6d79ded89161dd" Feb 18 00:43:59 crc kubenswrapper[4858]: I0218 00:43:59.113022 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerStarted","Data":"1f3fc0dc0a7817e4e823c66ff57d57d237e0d618f3feef26df84ee3f68f6a0e1"} Feb 18 00:43:59 crc kubenswrapper[4858]: I0218 00:43:59.113972 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerStarted","Data":"e7295c9aa2cdb32eda00e9d4ef9ac4704baa80672fc5fe495d379ee2ad421b03"} Feb 18 00:43:59 crc kubenswrapper[4858]: I0218 00:43:59.114038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" 
event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerStarted","Data":"0d4fcad01d4941554d1606df90c83d70a81e4d902f97118f75984878650b6d9e"} Feb 18 00:43:59 crc kubenswrapper[4858]: I0218 00:43:59.114119 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerStarted","Data":"5fb591befc0cdd769420d22051068f78b329be345b648441640fed6b648c5228"} Feb 18 00:43:59 crc kubenswrapper[4858]: I0218 00:43:59.114171 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerStarted","Data":"ff5128c75ac2a59204e881ddfe9c0317a11696543e4eacf929edd1be796f6b4c"} Feb 18 00:43:59 crc kubenswrapper[4858]: I0218 00:43:59.114222 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerStarted","Data":"c59f4fb44d617efcdb31c0daff33d1a53fbadfebadbbc5722f8328c572d7121c"} Feb 18 00:43:59 crc kubenswrapper[4858]: I0218 00:43:59.426058 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62c71780-47e7-4e14-9b93-60050f6f3141" path="/var/lib/kubelet/pods/62c71780-47e7-4e14-9b93-60050f6f3141/volumes" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.132532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerStarted","Data":"2e66a7d5f7672c24d5c02fad064a147e5f11070105de10fdffbf4ba3d41113ba"} Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.649565 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx"] Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.650210 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.652112 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-2mfxs" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.652368 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.654709 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.723314 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8q4\" (UniqueName: \"kubernetes.io/projected/560a0ca4-78ca-406c-a540-51483acdb0f8-kube-api-access-tm8q4\") pod \"obo-prometheus-operator-68bc856cb9-kmhxx\" (UID: \"560a0ca4-78ca-406c-a540-51483acdb0f8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.790825 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs"] Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.791637 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.793559 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-2g6p8" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.794288 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.816611 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c"] Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.817376 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.824643 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d28ad27c-eed0-473d-9257-1ea8f6c7291c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs\" (UID: \"d28ad27c-eed0-473d-9257-1ea8f6c7291c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.824689 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d6270c6-d227-4243-b495-19306dfa376c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c\" (UID: \"5d6270c6-d227-4243-b495-19306dfa376c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.824765 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d6270c6-d227-4243-b495-19306dfa376c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c\" (UID: \"5d6270c6-d227-4243-b495-19306dfa376c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.824817 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm8q4\" (UniqueName: \"kubernetes.io/projected/560a0ca4-78ca-406c-a540-51483acdb0f8-kube-api-access-tm8q4\") pod \"obo-prometheus-operator-68bc856cb9-kmhxx\" (UID: \"560a0ca4-78ca-406c-a540-51483acdb0f8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.824848 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d28ad27c-eed0-473d-9257-1ea8f6c7291c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs\" (UID: \"d28ad27c-eed0-473d-9257-1ea8f6c7291c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.856711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm8q4\" (UniqueName: \"kubernetes.io/projected/560a0ca4-78ca-406c-a540-51483acdb0f8-kube-api-access-tm8q4\") 
pod \"obo-prometheus-operator-68bc856cb9-kmhxx\" (UID: \"560a0ca4-78ca-406c-a540-51483acdb0f8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.925717 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d6270c6-d227-4243-b495-19306dfa376c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c\" (UID: \"5d6270c6-d227-4243-b495-19306dfa376c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.925937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d28ad27c-eed0-473d-9257-1ea8f6c7291c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs\" (UID: \"d28ad27c-eed0-473d-9257-1ea8f6c7291c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.925991 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d28ad27c-eed0-473d-9257-1ea8f6c7291c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs\" (UID: \"d28ad27c-eed0-473d-9257-1ea8f6c7291c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.926026 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d6270c6-d227-4243-b495-19306dfa376c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c\" (UID: \"5d6270c6-d227-4243-b495-19306dfa376c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.929136 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d28ad27c-eed0-473d-9257-1ea8f6c7291c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs\" (UID: \"d28ad27c-eed0-473d-9257-1ea8f6c7291c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.930253 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d6270c6-d227-4243-b495-19306dfa376c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c\" (UID: \"5d6270c6-d227-4243-b495-19306dfa376c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.930415 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d28ad27c-eed0-473d-9257-1ea8f6c7291c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs\" (UID: \"d28ad27c-eed0-473d-9257-1ea8f6c7291c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.931698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5d6270c6-d227-4243-b495-19306dfa376c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c\" (UID: \"5d6270c6-d227-4243-b495-19306dfa376c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.965160 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.972048 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-rfgvn"] Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.972846 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.974711 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7bfds" Feb 18 00:44:02 crc kubenswrapper[4858]: I0218 00:44:02.975306 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 00:44:02 crc kubenswrapper[4858]: E0218 00:44:02.998137 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(9b1e6f30cad715c992f0e69cfa3d7fe5f593cb06dd54cc0c3eb27421988c4e3b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:02 crc kubenswrapper[4858]: E0218 00:44:02.998235 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(9b1e6f30cad715c992f0e69cfa3d7fe5f593cb06dd54cc0c3eb27421988c4e3b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:02 crc kubenswrapper[4858]: E0218 00:44:02.998275 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(9b1e6f30cad715c992f0e69cfa3d7fe5f593cb06dd54cc0c3eb27421988c4e3b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:02 crc kubenswrapper[4858]: E0218 00:44:02.998358 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators(560a0ca4-78ca-406c-a540-51483acdb0f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators(560a0ca4-78ca-406c-a540-51483acdb0f8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(9b1e6f30cad715c992f0e69cfa3d7fe5f593cb06dd54cc0c3eb27421988c4e3b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" podUID="560a0ca4-78ca-406c-a540-51483acdb0f8" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.027229 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9cs\" (UniqueName: \"kubernetes.io/projected/4752855a-6a66-4ba8-a484-00326c32d431-kube-api-access-bj9cs\") pod \"observability-operator-59bdc8b94-rfgvn\" (UID: \"4752855a-6a66-4ba8-a484-00326c32d431\") " pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.027295 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4752855a-6a66-4ba8-a484-00326c32d431-observability-operator-tls\") pod \"observability-operator-59bdc8b94-rfgvn\" (UID: \"4752855a-6a66-4ba8-a484-00326c32d431\") " pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.071135 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xmkpw"] Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.071935 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.074215 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-2hzhz" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.104535 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.127979 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9cs\" (UniqueName: \"kubernetes.io/projected/4752855a-6a66-4ba8-a484-00326c32d431-kube-api-access-bj9cs\") pod \"observability-operator-59bdc8b94-rfgvn\" (UID: \"4752855a-6a66-4ba8-a484-00326c32d431\") " pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.128034 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zq6\" (UniqueName: \"kubernetes.io/projected/5d03f9d0-b687-4d66-9f89-297155cf2d51-kube-api-access-48zq6\") pod \"perses-operator-5bf474d74f-xmkpw\" (UID: \"5d03f9d0-b687-4d66-9f89-297155cf2d51\") " pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.128098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4752855a-6a66-4ba8-a484-00326c32d431-observability-operator-tls\") pod \"observability-operator-59bdc8b94-rfgvn\" (UID: \"4752855a-6a66-4ba8-a484-00326c32d431\") " pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.128130 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d03f9d0-b687-4d66-9f89-297155cf2d51-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xmkpw\" (UID: \"5d03f9d0-b687-4d66-9f89-297155cf2d51\") " 
pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.128537 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(1017fea33911ecd8ccc7394f712ac66f7c702113c3e9ce7f7d3dfa12b868079a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.128661 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(1017fea33911ecd8ccc7394f712ac66f7c702113c3e9ce7f7d3dfa12b868079a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.128747 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(1017fea33911ecd8ccc7394f712ac66f7c702113c3e9ce7f7d3dfa12b868079a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.128891 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators(d28ad27c-eed0-473d-9257-1ea8f6c7291c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators(d28ad27c-eed0-473d-9257-1ea8f6c7291c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(1017fea33911ecd8ccc7394f712ac66f7c702113c3e9ce7f7d3dfa12b868079a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" podUID="d28ad27c-eed0-473d-9257-1ea8f6c7291c" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.131285 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.132537 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4752855a-6a66-4ba8-a484-00326c32d431-observability-operator-tls\") pod \"observability-operator-59bdc8b94-rfgvn\" (UID: \"4752855a-6a66-4ba8-a484-00326c32d431\") " pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.145173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9cs\" (UniqueName: \"kubernetes.io/projected/4752855a-6a66-4ba8-a484-00326c32d431-kube-api-access-bj9cs\") pod \"observability-operator-59bdc8b94-rfgvn\" (UID: \"4752855a-6a66-4ba8-a484-00326c32d431\") " pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.155467 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(1b0baccef4f6d90befd30bcbd38cec7e49674637c7fd3c18ebd81056880ec2aa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.155571 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(1b0baccef4f6d90befd30bcbd38cec7e49674637c7fd3c18ebd81056880ec2aa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.155600 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(1b0baccef4f6d90befd30bcbd38cec7e49674637c7fd3c18ebd81056880ec2aa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.155667 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators(5d6270c6-d227-4243-b495-19306dfa376c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators(5d6270c6-d227-4243-b495-19306dfa376c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(1b0baccef4f6d90befd30bcbd38cec7e49674637c7fd3c18ebd81056880ec2aa): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" podUID="5d6270c6-d227-4243-b495-19306dfa376c" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.228440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48zq6\" (UniqueName: \"kubernetes.io/projected/5d03f9d0-b687-4d66-9f89-297155cf2d51-kube-api-access-48zq6\") pod \"perses-operator-5bf474d74f-xmkpw\" (UID: \"5d03f9d0-b687-4d66-9f89-297155cf2d51\") " pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.228515 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d03f9d0-b687-4d66-9f89-297155cf2d51-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xmkpw\" (UID: \"5d03f9d0-b687-4d66-9f89-297155cf2d51\") " pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.229267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d03f9d0-b687-4d66-9f89-297155cf2d51-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xmkpw\" (UID: \"5d03f9d0-b687-4d66-9f89-297155cf2d51\") " pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.250927 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48zq6\" (UniqueName: \"kubernetes.io/projected/5d03f9d0-b687-4d66-9f89-297155cf2d51-kube-api-access-48zq6\") pod \"perses-operator-5bf474d74f-xmkpw\" (UID: \"5d03f9d0-b687-4d66-9f89-297155cf2d51\") " pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.332459 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.353162 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(1c7d1c52848652ae04fb0b48933c446eb9ae48bab40de0fbd60bc29d7317c063): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.353218 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(1c7d1c52848652ae04fb0b48933c446eb9ae48bab40de0fbd60bc29d7317c063): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.353242 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(1c7d1c52848652ae04fb0b48933c446eb9ae48bab40de0fbd60bc29d7317c063): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.353289 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-rfgvn_openshift-operators(4752855a-6a66-4ba8-a484-00326c32d431)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-rfgvn_openshift-operators(4752855a-6a66-4ba8-a484-00326c32d431)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(1c7d1c52848652ae04fb0b48933c446eb9ae48bab40de0fbd60bc29d7317c063): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" podUID="4752855a-6a66-4ba8-a484-00326c32d431" Feb 18 00:44:03 crc kubenswrapper[4858]: I0218 00:44:03.388085 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.409989 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(f47ad6d7dddc1257d9068d92764a669c94f214fdcd54edb8cf2feb79383db0dc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.410049 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(f47ad6d7dddc1257d9068d92764a669c94f214fdcd54edb8cf2feb79383db0dc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.410076 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(f47ad6d7dddc1257d9068d92764a669c94f214fdcd54edb8cf2feb79383db0dc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:03 crc kubenswrapper[4858]: E0218 00:44:03.410124 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-xmkpw_openshift-operators(5d03f9d0-b687-4d66-9f89-297155cf2d51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-xmkpw_openshift-operators(5d03f9d0-b687-4d66-9f89-297155cf2d51)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(f47ad6d7dddc1257d9068d92764a669c94f214fdcd54edb8cf2feb79383db0dc): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" podUID="5d03f9d0-b687-4d66-9f89-297155cf2d51" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.366401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" event={"ID":"6ef1f934-1535-4a2b-a121-eb1c48ccbe4e","Type":"ContainerStarted","Data":"7a057017e5eae091f36caab02d2a33fdfda2b653f54bcb134d2149309dfeb6f7"} Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.367016 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.399645 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.403634 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" podStartSLOduration=8.403607021 podStartE2EDuration="8.403607021s" podCreationTimestamp="2026-02-18 00:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:44:05.398807173 +0000 UTC m=+598.704643915" watchObservedRunningTime="2026-02-18 00:44:05.403607021 +0000 UTC m=+598.709443763" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.461220 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xmkpw"] Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.461351 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.461807 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.476020 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx"] Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.476211 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.476718 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.482595 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-rfgvn"] Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.482671 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c"] Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.482774 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.483224 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.483544 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.483800 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.485603 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs"] Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.485717 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:05 crc kubenswrapper[4858]: I0218 00:44:05.486136 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.548641 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(3b16f188dbcabd1aa374ec69db4294871c1e6a21e88a88a338005f5fdee62a0f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.548716 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(3b16f188dbcabd1aa374ec69db4294871c1e6a21e88a88a338005f5fdee62a0f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.548737 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(3b16f188dbcabd1aa374ec69db4294871c1e6a21e88a88a338005f5fdee62a0f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.548778 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-xmkpw_openshift-operators(5d03f9d0-b687-4d66-9f89-297155cf2d51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-xmkpw_openshift-operators(5d03f9d0-b687-4d66-9f89-297155cf2d51)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(3b16f188dbcabd1aa374ec69db4294871c1e6a21e88a88a338005f5fdee62a0f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" podUID="5d03f9d0-b687-4d66-9f89-297155cf2d51" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.593721 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(b4e4803709f48fdf89d217165cbe89d120b494bfd5394f1cc409b0bf62bed81d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.593793 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(b4e4803709f48fdf89d217165cbe89d120b494bfd5394f1cc409b0bf62bed81d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.593823 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(b4e4803709f48fdf89d217165cbe89d120b494bfd5394f1cc409b0bf62bed81d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.593871 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators(560a0ca4-78ca-406c-a540-51483acdb0f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators(560a0ca4-78ca-406c-a540-51483acdb0f8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(b4e4803709f48fdf89d217165cbe89d120b494bfd5394f1cc409b0bf62bed81d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" podUID="560a0ca4-78ca-406c-a540-51483acdb0f8" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.597686 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(a5e6fafb7212bfd696949fda5e28c5af014dfdbefb7b84efd9682a5383301a5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.597733 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(a5e6fafb7212bfd696949fda5e28c5af014dfdbefb7b84efd9682a5383301a5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.597758 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(a5e6fafb7212bfd696949fda5e28c5af014dfdbefb7b84efd9682a5383301a5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.597796 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators(d28ad27c-eed0-473d-9257-1ea8f6c7291c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators(d28ad27c-eed0-473d-9257-1ea8f6c7291c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(a5e6fafb7212bfd696949fda5e28c5af014dfdbefb7b84efd9682a5383301a5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" podUID="d28ad27c-eed0-473d-9257-1ea8f6c7291c" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.601020 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(f042abb3115d5acb58ab6699a40d0c9bb4001a1304d9f561461285d2b5e29949): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.601050 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(f042abb3115d5acb58ab6699a40d0c9bb4001a1304d9f561461285d2b5e29949): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.601068 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(f042abb3115d5acb58ab6699a40d0c9bb4001a1304d9f561461285d2b5e29949): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.601095 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators(5d6270c6-d227-4243-b495-19306dfa376c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators(5d6270c6-d227-4243-b495-19306dfa376c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(f042abb3115d5acb58ab6699a40d0c9bb4001a1304d9f561461285d2b5e29949): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" podUID="5d6270c6-d227-4243-b495-19306dfa376c" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.606476 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(b9adec22654b4a98dc53f54c9517311fb029c4f55d52d1b6b1809ad24632295a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.606543 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(b9adec22654b4a98dc53f54c9517311fb029c4f55d52d1b6b1809ad24632295a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.606560 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(b9adec22654b4a98dc53f54c9517311fb029c4f55d52d1b6b1809ad24632295a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:05 crc kubenswrapper[4858]: E0218 00:44:05.606600 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-rfgvn_openshift-operators(4752855a-6a66-4ba8-a484-00326c32d431)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-rfgvn_openshift-operators(4752855a-6a66-4ba8-a484-00326c32d431)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(b9adec22654b4a98dc53f54c9517311fb029c4f55d52d1b6b1809ad24632295a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" podUID="4752855a-6a66-4ba8-a484-00326c32d431" Feb 18 00:44:06 crc kubenswrapper[4858]: I0218 00:44:06.370952 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:44:06 crc kubenswrapper[4858]: I0218 00:44:06.371352 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:44:06 crc kubenswrapper[4858]: I0218 00:44:06.403236 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:44:08 crc kubenswrapper[4858]: I0218 00:44:08.418873 4858 scope.go:117] "RemoveContainer" containerID="6df787764e784c8ac5e384a5692545df23cffb81f4ccfb4027bcea91c5242b7e" Feb 18 00:44:08 crc kubenswrapper[4858]: E0218 00:44:08.419238 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sr8bs_openshift-multus(631d8e25-82dd-4462-b98d-f076e7264b67)\"" pod="openshift-multus/multus-sr8bs" podUID="631d8e25-82dd-4462-b98d-f076e7264b67" Feb 18 00:44:17 crc kubenswrapper[4858]: I0218 00:44:17.419387 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:17 crc kubenswrapper[4858]: I0218 00:44:17.419387 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:17 crc kubenswrapper[4858]: I0218 00:44:17.423857 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:17 crc kubenswrapper[4858]: I0218 00:44:17.425312 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:17 crc kubenswrapper[4858]: E0218 00:44:17.464259 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(248a75c95de88ad38d647b6140ef436c6e63925fdaca8e0d83730895dde4377e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:17 crc kubenswrapper[4858]: E0218 00:44:17.464331 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(248a75c95de88ad38d647b6140ef436c6e63925fdaca8e0d83730895dde4377e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:17 crc kubenswrapper[4858]: E0218 00:44:17.464357 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(248a75c95de88ad38d647b6140ef436c6e63925fdaca8e0d83730895dde4377e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:17 crc kubenswrapper[4858]: E0218 00:44:17.464409 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators(5d6270c6-d227-4243-b495-19306dfa376c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators(5d6270c6-d227-4243-b495-19306dfa376c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_openshift-operators_5d6270c6-d227-4243-b495-19306dfa376c_0(248a75c95de88ad38d647b6140ef436c6e63925fdaca8e0d83730895dde4377e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" podUID="5d6270c6-d227-4243-b495-19306dfa376c" Feb 18 00:44:17 crc kubenswrapper[4858]: E0218 00:44:17.468438 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(861b7743ce5e8894627da610469d128eedb0eabb5f36a37ca6dfe42be45abea6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:17 crc kubenswrapper[4858]: E0218 00:44:17.468535 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(861b7743ce5e8894627da610469d128eedb0eabb5f36a37ca6dfe42be45abea6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:17 crc kubenswrapper[4858]: E0218 00:44:17.468582 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(861b7743ce5e8894627da610469d128eedb0eabb5f36a37ca6dfe42be45abea6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:17 crc kubenswrapper[4858]: E0218 00:44:17.468653 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators(d28ad27c-eed0-473d-9257-1ea8f6c7291c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators(d28ad27c-eed0-473d-9257-1ea8f6c7291c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_openshift-operators_d28ad27c-eed0-473d-9257-1ea8f6c7291c_0(861b7743ce5e8894627da610469d128eedb0eabb5f36a37ca6dfe42be45abea6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" podUID="d28ad27c-eed0-473d-9257-1ea8f6c7291c" Feb 18 00:44:18 crc kubenswrapper[4858]: I0218 00:44:18.418437 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:18 crc kubenswrapper[4858]: I0218 00:44:18.418890 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:18 crc kubenswrapper[4858]: E0218 00:44:18.437910 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(4497a08ccb7d79451ef08585be87170372da673aca19d00a3395bd9824ddab19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:18 crc kubenswrapper[4858]: E0218 00:44:18.437973 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(4497a08ccb7d79451ef08585be87170372da673aca19d00a3395bd9824ddab19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:18 crc kubenswrapper[4858]: E0218 00:44:18.437994 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(4497a08ccb7d79451ef08585be87170372da673aca19d00a3395bd9824ddab19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:18 crc kubenswrapper[4858]: E0218 00:44:18.438047 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-xmkpw_openshift-operators(5d03f9d0-b687-4d66-9f89-297155cf2d51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-xmkpw_openshift-operators(5d03f9d0-b687-4d66-9f89-297155cf2d51)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-xmkpw_openshift-operators_5d03f9d0-b687-4d66-9f89-297155cf2d51_0(4497a08ccb7d79451ef08585be87170372da673aca19d00a3395bd9824ddab19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" podUID="5d03f9d0-b687-4d66-9f89-297155cf2d51" Feb 18 00:44:19 crc kubenswrapper[4858]: I0218 00:44:19.418484 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:19 crc kubenswrapper[4858]: I0218 00:44:19.418947 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:19 crc kubenswrapper[4858]: E0218 00:44:19.441004 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(43d6590519a2046895415c35bf2b7c8b511061154a5f14a963ba9a547d39fc66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:19 crc kubenswrapper[4858]: E0218 00:44:19.441248 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(43d6590519a2046895415c35bf2b7c8b511061154a5f14a963ba9a547d39fc66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:19 crc kubenswrapper[4858]: E0218 00:44:19.441268 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(43d6590519a2046895415c35bf2b7c8b511061154a5f14a963ba9a547d39fc66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:19 crc kubenswrapper[4858]: E0218 00:44:19.441309 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-rfgvn_openshift-operators(4752855a-6a66-4ba8-a484-00326c32d431)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-rfgvn_openshift-operators(4752855a-6a66-4ba8-a484-00326c32d431)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rfgvn_openshift-operators_4752855a-6a66-4ba8-a484-00326c32d431_0(43d6590519a2046895415c35bf2b7c8b511061154a5f14a963ba9a547d39fc66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" podUID="4752855a-6a66-4ba8-a484-00326c32d431" Feb 18 00:44:20 crc kubenswrapper[4858]: I0218 00:44:20.419570 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:20 crc kubenswrapper[4858]: I0218 00:44:20.420420 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:20 crc kubenswrapper[4858]: I0218 00:44:20.420566 4858 scope.go:117] "RemoveContainer" containerID="6df787764e784c8ac5e384a5692545df23cffb81f4ccfb4027bcea91c5242b7e" Feb 18 00:44:20 crc kubenswrapper[4858]: E0218 00:44:20.471545 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(060055351abc690383eac137f498ba761d659f5fa7f854a083fea3bda116eec2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:44:20 crc kubenswrapper[4858]: E0218 00:44:20.471625 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(060055351abc690383eac137f498ba761d659f5fa7f854a083fea3bda116eec2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:20 crc kubenswrapper[4858]: E0218 00:44:20.471657 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(060055351abc690383eac137f498ba761d659f5fa7f854a083fea3bda116eec2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:20 crc kubenswrapper[4858]: E0218 00:44:20.471730 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators(560a0ca4-78ca-406c-a540-51483acdb0f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators(560a0ca4-78ca-406c-a540-51483acdb0f8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-kmhxx_openshift-operators_560a0ca4-78ca-406c-a540-51483acdb0f8_0(060055351abc690383eac137f498ba761d659f5fa7f854a083fea3bda116eec2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" podUID="560a0ca4-78ca-406c-a540-51483acdb0f8" Feb 18 00:44:21 crc kubenswrapper[4858]: I0218 00:44:21.457454 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sr8bs_631d8e25-82dd-4462-b98d-f076e7264b67/kube-multus/2.log" Feb 18 00:44:21 crc kubenswrapper[4858]: I0218 00:44:21.457537 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sr8bs" event={"ID":"631d8e25-82dd-4462-b98d-f076e7264b67","Type":"ContainerStarted","Data":"8ab36178e488c445742d10f7c3c0388df85bfa7eda87a07ef0cb20116e7b0994"} Feb 18 00:44:27 crc kubenswrapper[4858]: I0218 00:44:27.527871 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8g2wf" Feb 18 00:44:28 crc kubenswrapper[4858]: I0218 00:44:28.418932 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:28 crc kubenswrapper[4858]: I0218 00:44:28.419925 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" Feb 18 00:44:28 crc kubenswrapper[4858]: I0218 00:44:28.897460 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs"] Feb 18 00:44:28 crc kubenswrapper[4858]: W0218 00:44:28.908538 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd28ad27c_eed0_473d_9257_1ea8f6c7291c.slice/crio-3374164d74371c4d868f8af2f3c3a57e9ec0004eecec0db5f4a152e2a2b386c5 WatchSource:0}: Error finding container 3374164d74371c4d868f8af2f3c3a57e9ec0004eecec0db5f4a152e2a2b386c5: Status 404 returned error can't find the container with id 3374164d74371c4d868f8af2f3c3a57e9ec0004eecec0db5f4a152e2a2b386c5 Feb 18 00:44:29 crc kubenswrapper[4858]: I0218 00:44:29.494627 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" event={"ID":"d28ad27c-eed0-473d-9257-1ea8f6c7291c","Type":"ContainerStarted","Data":"3374164d74371c4d868f8af2f3c3a57e9ec0004eecec0db5f4a152e2a2b386c5"} Feb 18 00:44:31 crc kubenswrapper[4858]: I0218 00:44:31.420253 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:31 crc kubenswrapper[4858]: I0218 00:44:31.420679 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" Feb 18 00:44:32 crc kubenswrapper[4858]: I0218 00:44:32.418806 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:32 crc kubenswrapper[4858]: I0218 00:44:32.419599 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:32 crc kubenswrapper[4858]: I0218 00:44:32.420078 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:32 crc kubenswrapper[4858]: I0218 00:44:32.420313 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" Feb 18 00:44:32 crc kubenswrapper[4858]: I0218 00:44:32.460004 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx"] Feb 18 00:44:32 crc kubenswrapper[4858]: I0218 00:44:32.514932 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" event={"ID":"560a0ca4-78ca-406c-a540-51483acdb0f8","Type":"ContainerStarted","Data":"83d0e291d5da2381e022e0e36577bb98c137ec3357c501f0d47fa82fa4d2759b"} Feb 18 00:44:32 crc kubenswrapper[4858]: I0218 00:44:32.845765 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xmkpw"] Feb 18 00:44:32 crc kubenswrapper[4858]: W0218 00:44:32.849430 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d03f9d0_b687_4d66_9f89_297155cf2d51.slice/crio-4e7a726d8e1ac8188f9004afaf3851f6c6ea3e769903a134b04579a50b7c18ec WatchSource:0}: Error finding container 4e7a726d8e1ac8188f9004afaf3851f6c6ea3e769903a134b04579a50b7c18ec: Status 404 returned error can't find the container with id 4e7a726d8e1ac8188f9004afaf3851f6c6ea3e769903a134b04579a50b7c18ec Feb 18 00:44:32 crc kubenswrapper[4858]: I0218 00:44:32.898334 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c"] Feb 18 00:44:32 crc kubenswrapper[4858]: W0218 00:44:32.902915 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d6270c6_d227_4243_b495_19306dfa376c.slice/crio-3fa4ae8e29baa7d79fe98326ff68e6ddef5e0be4555282675ea8baa306f1754b WatchSource:0}: Error finding container 3fa4ae8e29baa7d79fe98326ff68e6ddef5e0be4555282675ea8baa306f1754b: Status 404 returned error can't find the container with id 3fa4ae8e29baa7d79fe98326ff68e6ddef5e0be4555282675ea8baa306f1754b Feb 18 00:44:33 crc kubenswrapper[4858]: I0218 00:44:33.521449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" event={"ID":"5d6270c6-d227-4243-b495-19306dfa376c","Type":"ContainerStarted","Data":"3ced850aa9f584ed47803dd0c3167ccfb0a5aad312280926e5a36169f7f860ba"} Feb 18 00:44:33 crc kubenswrapper[4858]: I0218 00:44:33.521811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" 
event={"ID":"5d6270c6-d227-4243-b495-19306dfa376c","Type":"ContainerStarted","Data":"3fa4ae8e29baa7d79fe98326ff68e6ddef5e0be4555282675ea8baa306f1754b"} Feb 18 00:44:33 crc kubenswrapper[4858]: I0218 00:44:33.525613 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" event={"ID":"d28ad27c-eed0-473d-9257-1ea8f6c7291c","Type":"ContainerStarted","Data":"748d9eb46b3c7d8c98c7e4dd4de2b2b5d66b238d25dbc8d2fcf2c897fa06d328"} Feb 18 00:44:33 crc kubenswrapper[4858]: I0218 00:44:33.526677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" event={"ID":"5d03f9d0-b687-4d66-9f89-297155cf2d51","Type":"ContainerStarted","Data":"4e7a726d8e1ac8188f9004afaf3851f6c6ea3e769903a134b04579a50b7c18ec"} Feb 18 00:44:33 crc kubenswrapper[4858]: I0218 00:44:33.545732 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c" podStartSLOduration=31.545716256 podStartE2EDuration="31.545716256s" podCreationTimestamp="2026-02-18 00:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:44:33.540955341 +0000 UTC m=+626.846792063" watchObservedRunningTime="2026-02-18 00:44:33.545716256 +0000 UTC m=+626.851552988" Feb 18 00:44:33 crc kubenswrapper[4858]: I0218 00:44:33.571732 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs" podStartSLOduration=28.140147927 podStartE2EDuration="31.571717459s" podCreationTimestamp="2026-02-18 00:44:02 +0000 UTC" firstStartedPulling="2026-02-18 00:44:28.912730361 +0000 UTC m=+622.218567093" lastFinishedPulling="2026-02-18 00:44:32.344299893 +0000 UTC m=+625.650136625" observedRunningTime="2026-02-18 00:44:33.566828399 +0000 UTC m=+626.872665151" watchObservedRunningTime="2026-02-18 00:44:33.571717459 +0000 UTC m=+626.877554191" Feb 18 00:44:34 crc kubenswrapper[4858]: I0218 00:44:34.418695 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:34 crc kubenswrapper[4858]: I0218 00:44:34.419185 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:35 crc kubenswrapper[4858]: I0218 00:44:35.450857 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-rfgvn"] Feb 18 00:44:35 crc kubenswrapper[4858]: I0218 00:44:35.536752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" event={"ID":"4752855a-6a66-4ba8-a484-00326c32d431","Type":"ContainerStarted","Data":"0b7fb53fed6bf377f6d3bf6a85cfc5dc7adf826c8c1a8219672bd567e354e29b"} Feb 18 00:44:35 crc kubenswrapper[4858]: I0218 00:44:35.538513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" event={"ID":"560a0ca4-78ca-406c-a540-51483acdb0f8","Type":"ContainerStarted","Data":"b5d6e3ac0b45d7fef389f899775313c3d42616b88edf0eed641f1e39c2816666"} Feb 18 00:44:35 crc kubenswrapper[4858]: I0218 00:44:35.540043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" event={"ID":"5d03f9d0-b687-4d66-9f89-297155cf2d51","Type":"ContainerStarted","Data":"167baed268960e042f9f178402d5ff9871902860fb5726b6a72c7aa1ad7d88db"} Feb 18 00:44:35 crc kubenswrapper[4858]: I0218 00:44:35.540220 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:35 crc kubenswrapper[4858]: I0218 00:44:35.578521 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kmhxx" podStartSLOduration=30.855063902 podStartE2EDuration="33.578479674s" podCreationTimestamp="2026-02-18 00:44:02 +0000 UTC" firstStartedPulling="2026-02-18 00:44:32.474479848 +0000 UTC m=+625.780316590" lastFinishedPulling="2026-02-18 00:44:35.19789562 +0000 UTC m=+628.503732362" observedRunningTime="2026-02-18 00:44:35.55446384 +0000 UTC m=+628.860300622" watchObservedRunningTime="2026-02-18 00:44:35.578479674 +0000 UTC m=+628.884316416" Feb 18 00:44:35 crc kubenswrapper[4858]: I0218 00:44:35.580484 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" podStartSLOduration=30.233697559 podStartE2EDuration="32.580467043s" podCreationTimestamp="2026-02-18 00:44:03 +0000 UTC" firstStartedPulling="2026-02-18 00:44:32.852843768 +0000 UTC m=+626.158680540" lastFinishedPulling="2026-02-18 00:44:35.199613282 +0000 UTC m=+628.505450024" observedRunningTime="2026-02-18 00:44:35.576404664 +0000 UTC m=+628.882241416" watchObservedRunningTime="2026-02-18 00:44:35.580467043 +0000 UTC m=+628.886303845" Feb 18 00:44:40 crc kubenswrapper[4858]: I0218 00:44:40.578046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" event={"ID":"4752855a-6a66-4ba8-a484-00326c32d431","Type":"ContainerStarted","Data":"07b0fca223abdf8149db2b3b15ebb6ecca3fe01e057a0407e741bb14a2b6cdcc"} Feb 18 00:44:40 crc kubenswrapper[4858]: I0218 00:44:40.578615 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:40 crc kubenswrapper[4858]: I0218 00:44:40.600156 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" podStartSLOduration=34.023891915 podStartE2EDuration="38.60013286s" 
podCreationTimestamp="2026-02-18 00:44:02 +0000 UTC" firstStartedPulling="2026-02-18 00:44:35.460851104 +0000 UTC m=+628.766687836" lastFinishedPulling="2026-02-18 00:44:40.037092049 +0000 UTC m=+633.342928781" observedRunningTime="2026-02-18 00:44:40.592994596 +0000 UTC m=+633.898831358" watchObservedRunningTime="2026-02-18 00:44:40.60013286 +0000 UTC m=+633.905969622" Feb 18 00:44:40 crc kubenswrapper[4858]: I0218 00:44:40.662479 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-rfgvn" Feb 18 00:44:43 crc kubenswrapper[4858]: I0218 00:44:43.391012 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-xmkpw" Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.930198 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6"] Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.931779 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6" Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.933915 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.933953 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6"] Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.936705 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wwgcz" Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.940639 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.941989 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-j4mwd"] Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.942640 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.945355 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7tqfb" Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.974653 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-bjzvr"] Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.976111 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bjzvr" Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.978678 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-mkn5v" Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.982685 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-j4mwd"] Feb 18 00:44:48 crc kubenswrapper[4858]: I0218 00:44:48.994750 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bjzvr"] Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.101705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vzvz\" (UniqueName: \"kubernetes.io/projected/08027ec7-d21f-49db-86fa-f66a295a15ab-kube-api-access-9vzvz\") pod \"cert-manager-cainjector-cf98fcc89-mg9m6\" (UID: \"08027ec7-d21f-49db-86fa-f66a295a15ab\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.101749 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nshgn\" (UniqueName: \"kubernetes.io/projected/d49e20f5-2603-45f9-8250-61044120864d-kube-api-access-nshgn\") pod \"cert-manager-858654f9db-bjzvr\" (UID: \"d49e20f5-2603-45f9-8250-61044120864d\") " pod="cert-manager/cert-manager-858654f9db-bjzvr" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.102039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7kgn\" (UniqueName: \"kubernetes.io/projected/9e4af7ad-05c1-4d35-9f79-dfb6aa002f52-kube-api-access-g7kgn\") pod \"cert-manager-webhook-687f57d79b-j4mwd\" (UID: \"9e4af7ad-05c1-4d35-9f79-dfb6aa002f52\") " pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.204132 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vzvz\" (UniqueName: \"kubernetes.io/projected/08027ec7-d21f-49db-86fa-f66a295a15ab-kube-api-access-9vzvz\") pod \"cert-manager-cainjector-cf98fcc89-mg9m6\" (UID: \"08027ec7-d21f-49db-86fa-f66a295a15ab\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.204220 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nshgn\" (UniqueName: \"kubernetes.io/projected/d49e20f5-2603-45f9-8250-61044120864d-kube-api-access-nshgn\") pod \"cert-manager-858654f9db-bjzvr\" (UID: \"d49e20f5-2603-45f9-8250-61044120864d\") " pod="cert-manager/cert-manager-858654f9db-bjzvr" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.204347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7kgn\" (UniqueName: \"kubernetes.io/projected/9e4af7ad-05c1-4d35-9f79-dfb6aa002f52-kube-api-access-g7kgn\") pod \"cert-manager-webhook-687f57d79b-j4mwd\" (UID: \"9e4af7ad-05c1-4d35-9f79-dfb6aa002f52\") " pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.232221 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7kgn\" (UniqueName: \"kubernetes.io/projected/9e4af7ad-05c1-4d35-9f79-dfb6aa002f52-kube-api-access-g7kgn\") pod \"cert-manager-webhook-687f57d79b-j4mwd\" (UID: \"9e4af7ad-05c1-4d35-9f79-dfb6aa002f52\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.235187 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nshgn\" (UniqueName: \"kubernetes.io/projected/d49e20f5-2603-45f9-8250-61044120864d-kube-api-access-nshgn\") pod \"cert-manager-858654f9db-bjzvr\" (UID: \"d49e20f5-2603-45f9-8250-61044120864d\") " pod="cert-manager/cert-manager-858654f9db-bjzvr" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.235487 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vzvz\" (UniqueName: \"kubernetes.io/projected/08027ec7-d21f-49db-86fa-f66a295a15ab-kube-api-access-9vzvz\") pod \"cert-manager-cainjector-cf98fcc89-mg9m6\" (UID: \"08027ec7-d21f-49db-86fa-f66a295a15ab\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.255913 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.291071 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.304452 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bjzvr" Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.755146 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6"] Feb 18 00:44:49 crc kubenswrapper[4858]: W0218 00:44:49.756948 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08027ec7_d21f_49db_86fa_f66a295a15ab.slice/crio-a2a1e0d500f607e053cbb19811924215c8a90a8d0b4339d014e028447ae2bc08 WatchSource:0}: Error finding container a2a1e0d500f607e053cbb19811924215c8a90a8d0b4339d014e028447ae2bc08: Status 404 returned error can't find the container with id a2a1e0d500f607e053cbb19811924215c8a90a8d0b4339d014e028447ae2bc08 Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.832721 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-j4mwd"] Feb 18 00:44:49 crc kubenswrapper[4858]: W0218 00:44:49.832758 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd49e20f5_2603_45f9_8250_61044120864d.slice/crio-1ee5bea5a127a9eed5fca74fd57a4c81cdbe05c545473add7abc973df43ebf47 WatchSource:0}: Error finding container 1ee5bea5a127a9eed5fca74fd57a4c81cdbe05c545473add7abc973df43ebf47: Status 404 returned error can't find the container with id 1ee5bea5a127a9eed5fca74fd57a4c81cdbe05c545473add7abc973df43ebf47 Feb 18 00:44:49 crc kubenswrapper[4858]: I0218 00:44:49.838796 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bjzvr"] Feb 18 00:44:49 crc kubenswrapper[4858]: W0218 00:44:49.839747 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e4af7ad_05c1_4d35_9f79_dfb6aa002f52.slice/crio-cfd3f8d2abd2bfd694dae2922b7fa612abe64f58cb9223b62e0ec3a7b41512ef WatchSource:0}: Error finding container cfd3f8d2abd2bfd694dae2922b7fa612abe64f58cb9223b62e0ec3a7b41512ef: Status 404 returned error can't find the container with id 
cfd3f8d2abd2bfd694dae2922b7fa612abe64f58cb9223b62e0ec3a7b41512ef Feb 18 00:44:50 crc kubenswrapper[4858]: I0218 00:44:50.632566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6" event={"ID":"08027ec7-d21f-49db-86fa-f66a295a15ab","Type":"ContainerStarted","Data":"a2a1e0d500f607e053cbb19811924215c8a90a8d0b4339d014e028447ae2bc08"} Feb 18 00:44:50 crc kubenswrapper[4858]: I0218 00:44:50.633745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" event={"ID":"9e4af7ad-05c1-4d35-9f79-dfb6aa002f52","Type":"ContainerStarted","Data":"cfd3f8d2abd2bfd694dae2922b7fa612abe64f58cb9223b62e0ec3a7b41512ef"} Feb 18 00:44:50 crc kubenswrapper[4858]: I0218 00:44:50.635163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bjzvr" event={"ID":"d49e20f5-2603-45f9-8250-61044120864d","Type":"ContainerStarted","Data":"1ee5bea5a127a9eed5fca74fd57a4c81cdbe05c545473add7abc973df43ebf47"} Feb 18 00:44:54 crc kubenswrapper[4858]: I0218 00:44:54.657652 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6" event={"ID":"08027ec7-d21f-49db-86fa-f66a295a15ab","Type":"ContainerStarted","Data":"d9a97d04c16dbc6eaf643b148c822b5c369708a6e1672846e2a759c7e069c78f"} Feb 18 00:44:54 crc kubenswrapper[4858]: I0218 00:44:54.660521 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bjzvr" event={"ID":"d49e20f5-2603-45f9-8250-61044120864d","Type":"ContainerStarted","Data":"e82054d7b1e23ad75bac7eb189edc4ad26b26e0bf6e5ba2cc62a6ec46fe0e19f"} Feb 18 00:44:54 crc kubenswrapper[4858]: I0218 00:44:54.663569 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" event={"ID":"9e4af7ad-05c1-4d35-9f79-dfb6aa002f52","Type":"ContainerStarted","Data":"7565c3e5fdc3be93404b357b9839da8029bfbb4bee76482016a8e54fdade0f12"} Feb 18 00:44:54 crc kubenswrapper[4858]: I0218 00:44:54.663809 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" Feb 18 00:44:54 crc kubenswrapper[4858]: I0218 00:44:54.681713 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mg9m6" podStartSLOduration=2.791487828 podStartE2EDuration="6.681683112s" podCreationTimestamp="2026-02-18 00:44:48 +0000 UTC" firstStartedPulling="2026-02-18 00:44:49.759242079 +0000 UTC m=+643.065078841" lastFinishedPulling="2026-02-18 00:44:53.649437393 +0000 UTC m=+646.955274125" observedRunningTime="2026-02-18 00:44:54.680415671 +0000 UTC m=+647.986252453" watchObservedRunningTime="2026-02-18 00:44:54.681683112 +0000 UTC m=+647.987519894" Feb 18 00:44:54 crc kubenswrapper[4858]: I0218 00:44:54.709795 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" podStartSLOduration=2.961626276 podStartE2EDuration="6.709775105s" podCreationTimestamp="2026-02-18 00:44:48 +0000 UTC" firstStartedPulling="2026-02-18 00:44:49.840748711 +0000 UTC m=+643.146585453" lastFinishedPulling="2026-02-18 00:44:53.58889751 +0000 UTC m=+646.894734282" observedRunningTime="2026-02-18 00:44:54.708596286 +0000 UTC m=+648.014433058" watchObservedRunningTime="2026-02-18 00:44:54.709775105 +0000 UTC m=+648.015611847" Feb 18 00:44:54 crc kubenswrapper[4858]: I0218 00:44:54.728597 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-bjzvr" podStartSLOduration=2.97536362 podStartE2EDuration="6.728570392s" podCreationTimestamp="2026-02-18 00:44:48 +0000 UTC" firstStartedPulling="2026-02-18 00:44:49.834556511 +0000 UTC m=+643.140393283" lastFinishedPulling="2026-02-18 00:44:53.587763313 +0000 UTC m=+646.893600055" observedRunningTime="2026-02-18 00:44:54.723443018 +0000 UTC m=+648.029279790" watchObservedRunningTime="2026-02-18 00:44:54.728570392 +0000 UTC m=+648.034407134" Feb 18 00:44:59 crc kubenswrapper[4858]: I0218 00:44:59.295172 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-j4mwd" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.160936 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2"] Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.161949 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.162413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3216e4c5-ff7a-45e4-9064-dd234a355dfb-config-volume\") pod \"collect-profiles-29522925-spxq2\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.162485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3216e4c5-ff7a-45e4-9064-dd234a355dfb-secret-volume\") pod \"collect-profiles-29522925-spxq2\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.162565 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqw9\" (UniqueName: \"kubernetes.io/projected/3216e4c5-ff7a-45e4-9064-dd234a355dfb-kube-api-access-qdqw9\") pod \"collect-profiles-29522925-spxq2\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.165296 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.167717 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.170071 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2"] Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.264073 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3216e4c5-ff7a-45e4-9064-dd234a355dfb-secret-volume\") pod \"collect-profiles-29522925-spxq2\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 
00:45:00.264392 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdqw9\" (UniqueName: \"kubernetes.io/projected/3216e4c5-ff7a-45e4-9064-dd234a355dfb-kube-api-access-qdqw9\") pod \"collect-profiles-29522925-spxq2\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.264866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3216e4c5-ff7a-45e4-9064-dd234a355dfb-config-volume\") pod \"collect-profiles-29522925-spxq2\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.265694 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3216e4c5-ff7a-45e4-9064-dd234a355dfb-config-volume\") pod \"collect-profiles-29522925-spxq2\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.270865 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3216e4c5-ff7a-45e4-9064-dd234a355dfb-secret-volume\") pod \"collect-profiles-29522925-spxq2\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.290702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdqw9\" (UniqueName: \"kubernetes.io/projected/3216e4c5-ff7a-45e4-9064-dd234a355dfb-kube-api-access-qdqw9\") pod \"collect-profiles-29522925-spxq2\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.478396 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:00 crc kubenswrapper[4858]: I0218 00:45:00.881974 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2"] Feb 18 00:45:01 crc kubenswrapper[4858]: I0218 00:45:01.712538 4858 generic.go:334] "Generic (PLEG): container finished" podID="3216e4c5-ff7a-45e4-9064-dd234a355dfb" containerID="9393af52b93b741680066896084ffc0ce4c793f8f694265cb9ebb37ca506d732" exitCode=0 Feb 18 00:45:01 crc kubenswrapper[4858]: I0218 00:45:01.712625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" event={"ID":"3216e4c5-ff7a-45e4-9064-dd234a355dfb","Type":"ContainerDied","Data":"9393af52b93b741680066896084ffc0ce4c793f8f694265cb9ebb37ca506d732"} Feb 18 00:45:01 crc kubenswrapper[4858]: I0218 00:45:01.712783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" event={"ID":"3216e4c5-ff7a-45e4-9064-dd234a355dfb","Type":"ContainerStarted","Data":"3410b92013f85a56664f6c8c348826b8e7a284adbea08397958fe1823d610adc"} Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.014909 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.146943 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3216e4c5-ff7a-45e4-9064-dd234a355dfb-config-volume\") pod \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.147160 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3216e4c5-ff7a-45e4-9064-dd234a355dfb-secret-volume\") pod \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.147314 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdqw9\" (UniqueName: \"kubernetes.io/projected/3216e4c5-ff7a-45e4-9064-dd234a355dfb-kube-api-access-qdqw9\") pod \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\" (UID: \"3216e4c5-ff7a-45e4-9064-dd234a355dfb\") " Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.147679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3216e4c5-ff7a-45e4-9064-dd234a355dfb-config-volume" (OuterVolumeSpecName: "config-volume") pod "3216e4c5-ff7a-45e4-9064-dd234a355dfb" (UID: "3216e4c5-ff7a-45e4-9064-dd234a355dfb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.151883 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3216e4c5-ff7a-45e4-9064-dd234a355dfb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3216e4c5-ff7a-45e4-9064-dd234a355dfb" (UID: "3216e4c5-ff7a-45e4-9064-dd234a355dfb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.152933 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3216e4c5-ff7a-45e4-9064-dd234a355dfb-kube-api-access-qdqw9" (OuterVolumeSpecName: "kube-api-access-qdqw9") pod "3216e4c5-ff7a-45e4-9064-dd234a355dfb" (UID: "3216e4c5-ff7a-45e4-9064-dd234a355dfb"). InnerVolumeSpecName "kube-api-access-qdqw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.248789 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3216e4c5-ff7a-45e4-9064-dd234a355dfb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.248864 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3216e4c5-ff7a-45e4-9064-dd234a355dfb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.248889 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdqw9\" (UniqueName: \"kubernetes.io/projected/3216e4c5-ff7a-45e4-9064-dd234a355dfb-kube-api-access-qdqw9\") on node \"crc\" DevicePath \"\"" Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.729571 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" event={"ID":"3216e4c5-ff7a-45e4-9064-dd234a355dfb","Type":"ContainerDied","Data":"3410b92013f85a56664f6c8c348826b8e7a284adbea08397958fe1823d610adc"} Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.729632 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3410b92013f85a56664f6c8c348826b8e7a284adbea08397958fe1823d610adc" Feb 18 00:45:03 crc kubenswrapper[4858]: I0218 00:45:03.729633 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.479847 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l"] Feb 18 00:45:22 crc kubenswrapper[4858]: E0218 00:45:22.480636 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3216e4c5-ff7a-45e4-9064-dd234a355dfb" containerName="collect-profiles" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.480651 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3216e4c5-ff7a-45e4-9064-dd234a355dfb" containerName="collect-profiles" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.480774 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3216e4c5-ff7a-45e4-9064-dd234a355dfb" containerName="collect-profiles" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.481705 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.488560 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l"] Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.517020 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.518063 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.518316 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmpc\" (UniqueName: \"kubernetes.io/projected/454c1998-5aac-4db1-a204-bbf491c27b13-kube-api-access-jrmpc\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.518585 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.619714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrmpc\" (UniqueName: \"kubernetes.io/projected/454c1998-5aac-4db1-a204-bbf491c27b13-kube-api-access-jrmpc\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.620277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.620578 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.620857 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.621085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.642762 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrmpc\" (UniqueName: \"kubernetes.io/projected/454c1998-5aac-4db1-a204-bbf491c27b13-kube-api-access-jrmpc\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:22 crc kubenswrapper[4858]: I0218 00:45:22.835219 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:23 crc kubenswrapper[4858]: I0218 00:45:23.131139 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l"] Feb 18 00:45:23 crc kubenswrapper[4858]: I0218 00:45:23.880657 4858 generic.go:334] "Generic (PLEG): container finished" podID="454c1998-5aac-4db1-a204-bbf491c27b13" containerID="6b623887b7083f097945a498d2a1e7d865361cbe5995617bdb0af6ab8f50b5c0" exitCode=0 Feb 18 00:45:23 crc kubenswrapper[4858]: I0218 00:45:23.880749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" event={"ID":"454c1998-5aac-4db1-a204-bbf491c27b13","Type":"ContainerDied","Data":"6b623887b7083f097945a498d2a1e7d865361cbe5995617bdb0af6ab8f50b5c0"} Feb 18 00:45:23 crc kubenswrapper[4858]: I0218 00:45:23.880958 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" event={"ID":"454c1998-5aac-4db1-a204-bbf491c27b13","Type":"ContainerStarted","Data":"2318527f1c526946eccf296382999b61251874b508d94ab68bef1832de767fb8"} Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.410557 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.411633 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.413986 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.414787 4858 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-bb25d" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.414986 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.420751 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.544186 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dncg4\" (UniqueName: \"kubernetes.io/projected/f962dced-6198-4cb9-8eda-91b0da46c110-kube-api-access-dncg4\") pod \"minio\" (UID: \"f962dced-6198-4cb9-8eda-91b0da46c110\") " pod="minio-dev/minio" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.544347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35421ca5-4068-4f17-b786-179c207885bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35421ca5-4068-4f17-b786-179c207885bf\") pod \"minio\" (UID: \"f962dced-6198-4cb9-8eda-91b0da46c110\") " pod="minio-dev/minio" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.645855 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dncg4\" (UniqueName: \"kubernetes.io/projected/f962dced-6198-4cb9-8eda-91b0da46c110-kube-api-access-dncg4\") pod \"minio\" (UID: \"f962dced-6198-4cb9-8eda-91b0da46c110\") " pod="minio-dev/minio" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.645924 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35421ca5-4068-4f17-b786-179c207885bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35421ca5-4068-4f17-b786-179c207885bf\") pod \"minio\" (UID: \"f962dced-6198-4cb9-8eda-91b0da46c110\") " pod="minio-dev/minio" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.649269 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.649302 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35421ca5-4068-4f17-b786-179c207885bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35421ca5-4068-4f17-b786-179c207885bf\") pod \"minio\" (UID: \"f962dced-6198-4cb9-8eda-91b0da46c110\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/645eceebf48eafdcdec7655746ef4d396c71e623bc83b414adae3b870f56ecef/globalmount\"" pod="minio-dev/minio" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.677340 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35421ca5-4068-4f17-b786-179c207885bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35421ca5-4068-4f17-b786-179c207885bf\") pod \"minio\" (UID: \"f962dced-6198-4cb9-8eda-91b0da46c110\") " pod="minio-dev/minio" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.681403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dncg4\" (UniqueName: \"kubernetes.io/projected/f962dced-6198-4cb9-8eda-91b0da46c110-kube-api-access-dncg4\") pod \"minio\" (UID: \"f962dced-6198-4cb9-8eda-91b0da46c110\") " pod="minio-dev/minio" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.741280 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 18 00:45:24 crc kubenswrapper[4858]: I0218 00:45:24.925410 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 00:45:25 crc kubenswrapper[4858]: I0218 00:45:25.893696 4858 generic.go:334] "Generic (PLEG): container finished" podID="454c1998-5aac-4db1-a204-bbf491c27b13" containerID="6dfb2c2e5977d5760a8c9be465f724bbef03d11167d27184f22c8b90c9bd1779" exitCode=0 Feb 18 00:45:25 crc kubenswrapper[4858]: I0218 00:45:25.893749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" event={"ID":"454c1998-5aac-4db1-a204-bbf491c27b13","Type":"ContainerDied","Data":"6dfb2c2e5977d5760a8c9be465f724bbef03d11167d27184f22c8b90c9bd1779"} Feb 18 00:45:25 crc kubenswrapper[4858]: I0218 00:45:25.895413 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f962dced-6198-4cb9-8eda-91b0da46c110","Type":"ContainerStarted","Data":"ecc56c325b7080c0a50166f167ff076c9dc4503e375a863fa24c94ef7c9763e1"} Feb 18 00:45:29 crc kubenswrapper[4858]: I0218 00:45:29.927927 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f962dced-6198-4cb9-8eda-91b0da46c110","Type":"ContainerStarted","Data":"692d380a582deaa116b742c9eeb7982decccffa5740928d1c9747ae5c1d967cb"} Feb 18 00:45:29 crc kubenswrapper[4858]: I0218 00:45:29.932242 4858 generic.go:334] "Generic (PLEG): container finished" podID="454c1998-5aac-4db1-a204-bbf491c27b13" containerID="f23edba9ec3514fb5a540d8b1fc1b6f41daf05f7ff9bb4fb6e9d251cb2eb51d2" exitCode=0 Feb 18 00:45:29 crc kubenswrapper[4858]: I0218 00:45:29.932288 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" event={"ID":"454c1998-5aac-4db1-a204-bbf491c27b13","Type":"ContainerDied","Data":"f23edba9ec3514fb5a540d8b1fc1b6f41daf05f7ff9bb4fb6e9d251cb2eb51d2"} Feb 18 00:45:29 crc kubenswrapper[4858]: I0218 00:45:29.949118 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="minio-dev/minio" podStartSLOduration=4.078016179 podStartE2EDuration="7.949097932s" podCreationTimestamp="2026-02-18 00:45:22 +0000 UTC" firstStartedPulling="2026-02-18 00:45:24.937058564 +0000 UTC m=+678.242895296" lastFinishedPulling="2026-02-18 00:45:28.808140277 +0000 UTC m=+682.113977049" observedRunningTime="2026-02-18 00:45:29.948056859 +0000 UTC m=+683.253893661" watchObservedRunningTime="2026-02-18 00:45:29.949097932 +0000 UTC m=+683.254934674" Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.326386 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.455935 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-util\") pod \"454c1998-5aac-4db1-a204-bbf491c27b13\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.456015 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-bundle\") pod \"454c1998-5aac-4db1-a204-bbf491c27b13\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.456056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrmpc\" (UniqueName: \"kubernetes.io/projected/454c1998-5aac-4db1-a204-bbf491c27b13-kube-api-access-jrmpc\") pod \"454c1998-5aac-4db1-a204-bbf491c27b13\" (UID: \"454c1998-5aac-4db1-a204-bbf491c27b13\") " Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.457568 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-bundle" (OuterVolumeSpecName: "bundle") pod "454c1998-5aac-4db1-a204-bbf491c27b13" (UID: "454c1998-5aac-4db1-a204-bbf491c27b13"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.462434 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454c1998-5aac-4db1-a204-bbf491c27b13-kube-api-access-jrmpc" (OuterVolumeSpecName: "kube-api-access-jrmpc") pod "454c1998-5aac-4db1-a204-bbf491c27b13" (UID: "454c1998-5aac-4db1-a204-bbf491c27b13"). InnerVolumeSpecName "kube-api-access-jrmpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.469670 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-util" (OuterVolumeSpecName: "util") pod "454c1998-5aac-4db1-a204-bbf491c27b13" (UID: "454c1998-5aac-4db1-a204-bbf491c27b13"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.557704 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.557753 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/454c1998-5aac-4db1-a204-bbf491c27b13-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.557774 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrmpc\" (UniqueName: \"kubernetes.io/projected/454c1998-5aac-4db1-a204-bbf491c27b13-kube-api-access-jrmpc\") on node \"crc\" DevicePath \"\"" Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.951424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" event={"ID":"454c1998-5aac-4db1-a204-bbf491c27b13","Type":"ContainerDied","Data":"2318527f1c526946eccf296382999b61251874b508d94ab68bef1832de767fb8"} Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.951472 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2318527f1c526946eccf296382999b61251874b508d94ab68bef1832de767fb8" Feb 18 00:45:31 crc kubenswrapper[4858]: I0218 00:45:31.951573 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.869433 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg"] Feb 18 00:45:36 crc kubenswrapper[4858]: E0218 00:45:36.870249 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454c1998-5aac-4db1-a204-bbf491c27b13" containerName="pull" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.870264 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="454c1998-5aac-4db1-a204-bbf491c27b13" containerName="pull" Feb 18 00:45:36 crc kubenswrapper[4858]: E0218 00:45:36.870284 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454c1998-5aac-4db1-a204-bbf491c27b13" containerName="util" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.870292 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="454c1998-5aac-4db1-a204-bbf491c27b13" containerName="util" Feb 18 00:45:36 crc kubenswrapper[4858]: E0218 00:45:36.870306 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454c1998-5aac-4db1-a204-bbf491c27b13" containerName="extract" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.870314 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="454c1998-5aac-4db1-a204-bbf491c27b13" containerName="extract" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.870453 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="454c1998-5aac-4db1-a204-bbf491c27b13" containerName="extract" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.871216 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.873927 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.874067 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.873913 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.873939 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.875429 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-zj6zp" Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.886539 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg"] Feb 18 00:45:36 crc kubenswrapper[4858]: I0218 00:45:36.888764 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.027395 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2422dde-b68b-41d0-acbf-2473c28f5177-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.027445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2422dde-b68b-41d0-acbf-2473c28f5177-webhook-cert\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.027470 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2422dde-b68b-41d0-acbf-2473c28f5177-apiservice-cert\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.027504 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b2422dde-b68b-41d0-acbf-2473c28f5177-manager-config\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.027532 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5bls\" 
(UniqueName: \"kubernetes.io/projected/b2422dde-b68b-41d0-acbf-2473c28f5177-kube-api-access-v5bls\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.128706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2422dde-b68b-41d0-acbf-2473c28f5177-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.128743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2422dde-b68b-41d0-acbf-2473c28f5177-webhook-cert\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.128766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2422dde-b68b-41d0-acbf-2473c28f5177-apiservice-cert\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.128789 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b2422dde-b68b-41d0-acbf-2473c28f5177-manager-config\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.128819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5bls\" (UniqueName: \"kubernetes.io/projected/b2422dde-b68b-41d0-acbf-2473c28f5177-kube-api-access-v5bls\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.129883 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b2422dde-b68b-41d0-acbf-2473c28f5177-manager-config\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.136008 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2422dde-b68b-41d0-acbf-2473c28f5177-apiservice-cert\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.136068 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2422dde-b68b-41d0-acbf-2473c28f5177-webhook-cert\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.137965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2422dde-b68b-41d0-acbf-2473c28f5177-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.149656 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5bls\" (UniqueName: \"kubernetes.io/projected/b2422dde-b68b-41d0-acbf-2473c28f5177-kube-api-access-v5bls\") pod \"loki-operator-controller-manager-5c5fb49d49-cxcxg\" (UID: \"b2422dde-b68b-41d0-acbf-2473c28f5177\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.192745 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.594801 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg"] Feb 18 00:45:37 crc kubenswrapper[4858]: I0218 00:45:37.988516 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" event={"ID":"b2422dde-b68b-41d0-acbf-2473c28f5177","Type":"ContainerStarted","Data":"8af7dfaa9d5bdf32fc38941c83fb2094ef3637754a7d3e208c3800da81a53a09"} Feb 18 00:45:45 crc kubenswrapper[4858]: I0218 00:45:45.031568 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" event={"ID":"b2422dde-b68b-41d0-acbf-2473c28f5177","Type":"ContainerStarted","Data":"c6114297db0ee638864bc086feb5c803180b5f40481aa22b88daed2cef8cb0e0"} Feb 18 00:45:51 crc kubenswrapper[4858]: I0218 00:45:51.068208 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" event={"ID":"b2422dde-b68b-41d0-acbf-2473c28f5177","Type":"ContainerStarted","Data":"64616a30fe5067a2968d53988a142a2999a9627427dc8e9a549bf8d3c7603be3"} Feb 18 00:45:51 crc kubenswrapper[4858]: I0218 00:45:51.069034 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:51 crc kubenswrapper[4858]: I0218 00:45:51.071530 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" Feb 18 00:45:51 crc kubenswrapper[4858]: I0218 00:45:51.093881 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5c5fb49d49-cxcxg" podStartSLOduration=1.83004038 podStartE2EDuration="15.093864568s" podCreationTimestamp="2026-02-18 00:45:36 +0000 UTC" firstStartedPulling="2026-02-18 00:45:37.599594247 +0000 UTC 
m=+690.905430979" lastFinishedPulling="2026-02-18 00:45:50.863418435 +0000 UTC m=+704.169255167" observedRunningTime="2026-02-18 00:45:51.091269949 +0000 UTC m=+704.397106691" watchObservedRunningTime="2026-02-18 00:45:51.093864568 +0000 UTC m=+704.399701300" Feb 18 00:45:55 crc kubenswrapper[4858]: I0218 00:45:55.265260 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:45:55 crc kubenswrapper[4858]: I0218 00:45:55.265642 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:46:25 crc kubenswrapper[4858]: I0218 00:46:25.265053 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:46:25 crc kubenswrapper[4858]: I0218 00:46:25.265657 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.241776 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77"] Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.243433 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.245874 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.253995 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77"] Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.372428 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.372530 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.372599 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nt77\" (UniqueName: \"kubernetes.io/projected/f984786a-760f-4fa7-91fb-6e1b447db492-kube-api-access-4nt77\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.473953 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.474075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.474137 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nt77\" (UniqueName: \"kubernetes.io/projected/f984786a-760f-4fa7-91fb-6e1b447db492-kube-api-access-4nt77\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.474695 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.474769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.504272 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nt77\" (UniqueName: \"kubernetes.io/projected/f984786a-760f-4fa7-91fb-6e1b447db492-kube-api-access-4nt77\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.569376 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:30 crc kubenswrapper[4858]: I0218 00:46:30.835818 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77"] Feb 18 00:46:31 crc kubenswrapper[4858]: I0218 00:46:31.455002 4858 generic.go:334] "Generic (PLEG): container finished" podID="f984786a-760f-4fa7-91fb-6e1b447db492" containerID="908f571cec1076a8f483afe58511a90b4b3291c15087bd16e71ee305cc36b170" exitCode=0 Feb 18 00:46:31 crc kubenswrapper[4858]: I0218 00:46:31.455064 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" event={"ID":"f984786a-760f-4fa7-91fb-6e1b447db492","Type":"ContainerDied","Data":"908f571cec1076a8f483afe58511a90b4b3291c15087bd16e71ee305cc36b170"} Feb 18 00:46:31 crc kubenswrapper[4858]: I0218 00:46:31.455105 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" event={"ID":"f984786a-760f-4fa7-91fb-6e1b447db492","Type":"ContainerStarted","Data":"f97c68cbb48f74362c06e758ad97e01fb1ef10fa02cddd7fe9c7fe446cfbe38c"} Feb 18 00:46:33 crc kubenswrapper[4858]: I0218 00:46:33.474303 4858 generic.go:334] "Generic (PLEG): container finished" podID="f984786a-760f-4fa7-91fb-6e1b447db492" containerID="3976dd90598bde7f2aaefac475219b041606aff527f86af4e5aceeb6db40de8c" exitCode=0 Feb 18 00:46:33 crc kubenswrapper[4858]: I0218 00:46:33.474545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" event={"ID":"f984786a-760f-4fa7-91fb-6e1b447db492","Type":"ContainerDied","Data":"3976dd90598bde7f2aaefac475219b041606aff527f86af4e5aceeb6db40de8c"} Feb 18 00:46:34 crc kubenswrapper[4858]: I0218 00:46:34.485296 4858 generic.go:334] "Generic (PLEG): container finished" podID="f984786a-760f-4fa7-91fb-6e1b447db492" containerID="faab4fd7e586708c206d5b1359422195080c3a42abc31c2ce70ae691cac11d16" exitCode=0 Feb 18 00:46:34 crc kubenswrapper[4858]: I0218 
00:46:34.485418 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" event={"ID":"f984786a-760f-4fa7-91fb-6e1b447db492","Type":"ContainerDied","Data":"faab4fd7e586708c206d5b1359422195080c3a42abc31c2ce70ae691cac11d16"} Feb 18 00:46:35 crc kubenswrapper[4858]: I0218 00:46:35.796488 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:35 crc kubenswrapper[4858]: I0218 00:46:35.951339 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nt77\" (UniqueName: \"kubernetes.io/projected/f984786a-760f-4fa7-91fb-6e1b447db492-kube-api-access-4nt77\") pod \"f984786a-760f-4fa7-91fb-6e1b447db492\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " Feb 18 00:46:35 crc kubenswrapper[4858]: I0218 00:46:35.951424 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-util\") pod \"f984786a-760f-4fa7-91fb-6e1b447db492\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " Feb 18 00:46:35 crc kubenswrapper[4858]: I0218 00:46:35.951475 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-bundle\") pod \"f984786a-760f-4fa7-91fb-6e1b447db492\" (UID: \"f984786a-760f-4fa7-91fb-6e1b447db492\") " Feb 18 00:46:35 crc kubenswrapper[4858]: I0218 00:46:35.952393 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-bundle" (OuterVolumeSpecName: "bundle") pod "f984786a-760f-4fa7-91fb-6e1b447db492" (UID: "f984786a-760f-4fa7-91fb-6e1b447db492"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:46:35 crc kubenswrapper[4858]: I0218 00:46:35.957917 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f984786a-760f-4fa7-91fb-6e1b447db492-kube-api-access-4nt77" (OuterVolumeSpecName: "kube-api-access-4nt77") pod "f984786a-760f-4fa7-91fb-6e1b447db492" (UID: "f984786a-760f-4fa7-91fb-6e1b447db492"). InnerVolumeSpecName "kube-api-access-4nt77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:46:35 crc kubenswrapper[4858]: I0218 00:46:35.981260 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-util" (OuterVolumeSpecName: "util") pod "f984786a-760f-4fa7-91fb-6e1b447db492" (UID: "f984786a-760f-4fa7-91fb-6e1b447db492"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:46:36 crc kubenswrapper[4858]: I0218 00:46:36.053085 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nt77\" (UniqueName: \"kubernetes.io/projected/f984786a-760f-4fa7-91fb-6e1b447db492-kube-api-access-4nt77\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:36 crc kubenswrapper[4858]: I0218 00:46:36.053143 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:36 crc kubenswrapper[4858]: I0218 00:46:36.053162 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f984786a-760f-4fa7-91fb-6e1b447db492-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:36 crc kubenswrapper[4858]: I0218 00:46:36.505827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" event={"ID":"f984786a-760f-4fa7-91fb-6e1b447db492","Type":"ContainerDied","Data":"f97c68cbb48f74362c06e758ad97e01fb1ef10fa02cddd7fe9c7fe446cfbe38c"} Feb 18 00:46:36 crc kubenswrapper[4858]: I0218 00:46:36.505881 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f97c68cbb48f74362c06e758ad97e01fb1ef10fa02cddd7fe9c7fe446cfbe38c" Feb 18 00:46:36 crc kubenswrapper[4858]: I0218 00:46:36.505915 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.780108 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-wq6f6"] Feb 18 00:46:39 crc kubenswrapper[4858]: E0218 00:46:39.782021 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f984786a-760f-4fa7-91fb-6e1b447db492" containerName="extract" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.782166 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f984786a-760f-4fa7-91fb-6e1b447db492" containerName="extract" Feb 18 00:46:39 crc kubenswrapper[4858]: E0218 00:46:39.782302 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f984786a-760f-4fa7-91fb-6e1b447db492" containerName="util" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.782417 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f984786a-760f-4fa7-91fb-6e1b447db492" containerName="util" Feb 18 00:46:39 crc kubenswrapper[4858]: E0218 00:46:39.782558 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f984786a-760f-4fa7-91fb-6e1b447db492" containerName="pull" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.782665 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f984786a-760f-4fa7-91fb-6e1b447db492" containerName="pull" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.782955 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f984786a-760f-4fa7-91fb-6e1b447db492" containerName="extract" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.783686 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-wq6f6" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.787873 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.788172 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6qjcp" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.790147 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.804723 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-wq6f6"] Feb 18 00:46:39 crc kubenswrapper[4858]: I0218 00:46:39.900762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckw27\" (UniqueName: \"kubernetes.io/projected/3d9133a3-024f-4621-a1e2-c7393b87df23-kube-api-access-ckw27\") pod \"nmstate-operator-694c9596b7-wq6f6\" (UID: \"3d9133a3-024f-4621-a1e2-c7393b87df23\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-wq6f6" Feb 18 00:46:40 crc kubenswrapper[4858]: I0218 00:46:40.004286 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckw27\" (UniqueName: \"kubernetes.io/projected/3d9133a3-024f-4621-a1e2-c7393b87df23-kube-api-access-ckw27\") pod \"nmstate-operator-694c9596b7-wq6f6\" (UID: \"3d9133a3-024f-4621-a1e2-c7393b87df23\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-wq6f6" Feb 18 00:46:40 crc kubenswrapper[4858]: I0218 00:46:40.036902 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckw27\" (UniqueName: \"kubernetes.io/projected/3d9133a3-024f-4621-a1e2-c7393b87df23-kube-api-access-ckw27\") pod \"nmstate-operator-694c9596b7-wq6f6\" (UID: \"3d9133a3-024f-4621-a1e2-c7393b87df23\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-wq6f6" Feb 18 00:46:40 crc kubenswrapper[4858]: I0218 00:46:40.104746 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-wq6f6" Feb 18 00:46:40 crc kubenswrapper[4858]: I0218 00:46:40.339047 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-wq6f6"] Feb 18 00:46:40 crc kubenswrapper[4858]: I0218 00:46:40.536031 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-wq6f6" event={"ID":"3d9133a3-024f-4621-a1e2-c7393b87df23","Type":"ContainerStarted","Data":"817e3eb52e6a7bf1053c84d82200eebc6d8e45b8db96a32d1dd009435ed1a20d"} Feb 18 00:46:43 crc kubenswrapper[4858]: I0218 00:46:43.555128 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-wq6f6" event={"ID":"3d9133a3-024f-4621-a1e2-c7393b87df23","Type":"ContainerStarted","Data":"f25a8494e62ff22357f8f81fc4c0a8a39685d0200cf498c3c33d330c37217798"} Feb 18 00:46:43 crc kubenswrapper[4858]: I0218 00:46:43.573015 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-wq6f6" podStartSLOduration=2.061375172 podStartE2EDuration="4.572994926s" podCreationTimestamp="2026-02-18 00:46:39 +0000 UTC" firstStartedPulling="2026-02-18 00:46:40.355173288 +0000 UTC m=+753.661010020" lastFinishedPulling="2026-02-18 00:46:42.866793042 +0000 UTC m=+756.172629774" observedRunningTime="2026-02-18 00:46:43.569092952 +0000 UTC m=+756.874929694" watchObservedRunningTime="2026-02-18 00:46:43.572994926 +0000 UTC m=+756.878831658" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.568930 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7"] Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.570222 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.572895 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-4sh46" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.588359 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7"] Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.597281 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp"] Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.598380 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.601689 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.623300 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp"] Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.635992 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-gjmb7"] Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.637105 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.674277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9ccccd6f-f4c0-4948-a851-e837f10702c3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6nkwp\" (UID: \"9ccccd6f-f4c0-4948-a851-e837f10702c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.674334 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbpfm\" (UniqueName: \"kubernetes.io/projected/897ba371-53cf-440a-9045-2d45bfae9032-kube-api-access-vbpfm\") pod \"nmstate-metrics-58c85c668d-jsdd7\" (UID: \"897ba371-53cf-440a-9045-2d45bfae9032\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.674352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-nmstate-lock\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.674523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-dbus-socket\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.674578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-ovs-socket\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.674624 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq292\" (UniqueName: \"kubernetes.io/projected/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-kube-api-access-mq292\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.674663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cts77\" (UniqueName: \"kubernetes.io/projected/9ccccd6f-f4c0-4948-a851-e837f10702c3-kube-api-access-cts77\") pod \"nmstate-webhook-866bcb46dc-6nkwp\" (UID: \"9ccccd6f-f4c0-4948-a851-e837f10702c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.719401 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk"] Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.720276 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.723398 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.723428 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.723410 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6d2hz" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.736296 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk"] Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776158 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/95ad9559-743e-4d16-8dba-6cea830de767-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776210 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbpfm\" (UniqueName: \"kubernetes.io/projected/897ba371-53cf-440a-9045-2d45bfae9032-kube-api-access-vbpfm\") pod \"nmstate-metrics-58c85c668d-jsdd7\" (UID: \"897ba371-53cf-440a-9045-2d45bfae9032\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-nmstate-lock\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-dbus-socket\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776267 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdpvd\" (UniqueName: \"kubernetes.io/projected/95ad9559-743e-4d16-8dba-6cea830de767-kube-api-access-hdpvd\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776290 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-ovs-socket\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq292\" (UniqueName: \"kubernetes.io/projected/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-kube-api-access-mq292\") pod \"nmstate-handler-gjmb7\" (UID: 
\"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776340 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/95ad9559-743e-4d16-8dba-6cea830de767-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776365 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cts77\" (UniqueName: \"kubernetes.io/projected/9ccccd6f-f4c0-4948-a851-e837f10702c3-kube-api-access-cts77\") pod \"nmstate-webhook-866bcb46dc-6nkwp\" (UID: \"9ccccd6f-f4c0-4948-a851-e837f10702c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.776394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9ccccd6f-f4c0-4948-a851-e837f10702c3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6nkwp\" (UID: \"9ccccd6f-f4c0-4948-a851-e837f10702c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:44 crc kubenswrapper[4858]: E0218 00:46:44.776490 4858 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 18 00:46:44 crc kubenswrapper[4858]: E0218 00:46:44.776551 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ccccd6f-f4c0-4948-a851-e837f10702c3-tls-key-pair podName:9ccccd6f-f4c0-4948-a851-e837f10702c3 nodeName:}" failed. No retries permitted until 2026-02-18 00:46:45.276534081 +0000 UTC m=+758.582370813 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9ccccd6f-f4c0-4948-a851-e837f10702c3-tls-key-pair") pod "nmstate-webhook-866bcb46dc-6nkwp" (UID: "9ccccd6f-f4c0-4948-a851-e837f10702c3") : secret "openshift-nmstate-webhook" not found Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.777304 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-nmstate-lock\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.777853 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-dbus-socket\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.777890 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-ovs-socket\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.799858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cts77\" (UniqueName: \"kubernetes.io/projected/9ccccd6f-f4c0-4948-a851-e837f10702c3-kube-api-access-cts77\") pod \"nmstate-webhook-866bcb46dc-6nkwp\" (UID: \"9ccccd6f-f4c0-4948-a851-e837f10702c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.800215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbpfm\" (UniqueName: \"kubernetes.io/projected/897ba371-53cf-440a-9045-2d45bfae9032-kube-api-access-vbpfm\") pod \"nmstate-metrics-58c85c668d-jsdd7\" (UID: \"897ba371-53cf-440a-9045-2d45bfae9032\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.818133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq292\" (UniqueName: \"kubernetes.io/projected/c83e1b85-4bb0-47f8-b152-a5f5c34cc919-kube-api-access-mq292\") pod \"nmstate-handler-gjmb7\" (UID: \"c83e1b85-4bb0-47f8-b152-a5f5c34cc919\") " pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.877794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdpvd\" (UniqueName: \"kubernetes.io/projected/95ad9559-743e-4d16-8dba-6cea830de767-kube-api-access-hdpvd\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.877864 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/95ad9559-743e-4d16-8dba-6cea830de767-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.877920 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/95ad9559-743e-4d16-8dba-6cea830de767-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:44 crc kubenswrapper[4858]: E0218 00:46:44.878001 4858 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 18 00:46:44 crc kubenswrapper[4858]: E0218 00:46:44.878039 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95ad9559-743e-4d16-8dba-6cea830de767-plugin-serving-cert podName:95ad9559-743e-4d16-8dba-6cea830de767 nodeName:}" failed. No retries permitted until 2026-02-18 00:46:45.378025861 +0000 UTC m=+758.683862593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/95ad9559-743e-4d16-8dba-6cea830de767-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-sfxkk" (UID: "95ad9559-743e-4d16-8dba-6cea830de767") : secret "plugin-serving-cert" not found Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.878798 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/95ad9559-743e-4d16-8dba-6cea830de767-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.885539 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.897373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdpvd\" (UniqueName: \"kubernetes.io/projected/95ad9559-743e-4d16-8dba-6cea830de767-kube-api-access-hdpvd\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.916531 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6df979cf4-kxwdv"] Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.917223 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.919971 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6df979cf4-kxwdv"] Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.960538 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.979523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-console-config\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.979561 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-service-ca\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.979597 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eeca3542-a3d9-461e-904f-db09c8549564-console-serving-cert\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.979727 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-oauth-serving-cert\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.979795 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eeca3542-a3d9-461e-904f-db09c8549564-console-oauth-config\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.979839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxw9\" (UniqueName: \"kubernetes.io/projected/eeca3542-a3d9-461e-904f-db09c8549564-kube-api-access-shxw9\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:44 crc kubenswrapper[4858]: I0218 00:46:44.979885 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-trusted-ca-bundle\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:44 crc kubenswrapper[4858]: W0218 00:46:44.986900 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83e1b85_4bb0_47f8_b152_a5f5c34cc919.slice/crio-4b6f4c1224b8ca74f947c1c09e40d5fbf3da041d3aaa879805b12b0d42642555 WatchSource:0}: Error finding container 4b6f4c1224b8ca74f947c1c09e40d5fbf3da041d3aaa879805b12b0d42642555: Status 404 returned error can't find the container with id 4b6f4c1224b8ca74f947c1c09e40d5fbf3da041d3aaa879805b12b0d42642555 Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.081008 
4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-console-config\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.081056 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-service-ca\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.081100 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eeca3542-a3d9-461e-904f-db09c8549564-console-serving-cert\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.081134 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-oauth-serving-cert\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.081167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eeca3542-a3d9-461e-904f-db09c8549564-console-oauth-config\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.081192 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxw9\" (UniqueName: \"kubernetes.io/projected/eeca3542-a3d9-461e-904f-db09c8549564-kube-api-access-shxw9\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.081326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-trusted-ca-bundle\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.082171 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-console-config\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.082809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-trusted-ca-bundle\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.083102 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-oauth-serving-cert\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.084003 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eeca3542-a3d9-461e-904f-db09c8549564-service-ca\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.087764 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eeca3542-a3d9-461e-904f-db09c8549564-console-serving-cert\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.088086 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eeca3542-a3d9-461e-904f-db09c8549564-console-oauth-config\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.100352 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxw9\" (UniqueName: \"kubernetes.io/projected/eeca3542-a3d9-461e-904f-db09c8549564-kube-api-access-shxw9\") pod \"console-6df979cf4-kxwdv\" (UID: \"eeca3542-a3d9-461e-904f-db09c8549564\") " pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.265660 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.285135 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9ccccd6f-f4c0-4948-a851-e837f10702c3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6nkwp\" (UID: \"9ccccd6f-f4c0-4948-a851-e837f10702c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.290019 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9ccccd6f-f4c0-4948-a851-e837f10702c3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6nkwp\" (UID: \"9ccccd6f-f4c0-4948-a851-e837f10702c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.385050 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7"] Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.386696 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/95ad9559-743e-4d16-8dba-6cea830de767-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.391296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/95ad9559-743e-4d16-8dba-6cea830de767-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sfxkk\" (UID: \"95ad9559-743e-4d16-8dba-6cea830de767\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.522983 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.569806 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7" event={"ID":"897ba371-53cf-440a-9045-2d45bfae9032","Type":"ContainerStarted","Data":"1af6b456aa5995017bfd13dc798967782ca3eb6bd182ec9a3482000c21b8ebed"} Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.571232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gjmb7" event={"ID":"c83e1b85-4bb0-47f8-b152-a5f5c34cc919","Type":"ContainerStarted","Data":"4b6f4c1224b8ca74f947c1c09e40d5fbf3da041d3aaa879805b12b0d42642555"} Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.635791 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.723510 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6df979cf4-kxwdv"] Feb 18 00:46:45 crc kubenswrapper[4858]: I0218 00:46:45.857980 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp"] Feb 18 00:46:45 crc kubenswrapper[4858]: W0218 00:46:45.877742 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ccccd6f_f4c0_4948_a851_e837f10702c3.slice/crio-fd07d3b77362bf24163b9b25da4d298270e2cd4877f70b35c84abbd6736aec59 WatchSource:0}: Error finding container fd07d3b77362bf24163b9b25da4d298270e2cd4877f70b35c84abbd6736aec59: Status 404 returned error can't find the container with id fd07d3b77362bf24163b9b25da4d298270e2cd4877f70b35c84abbd6736aec59 Feb 18 00:46:46 crc kubenswrapper[4858]: I0218 00:46:46.189684 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk"] Feb 18 00:46:46 crc kubenswrapper[4858]: W0218 00:46:46.193923 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95ad9559_743e_4d16_8dba_6cea830de767.slice/crio-884a6702a3690bce9a9d6048b6ff34f78acc71e9949578b57bca3f2a214ae98a WatchSource:0}: Error finding container 884a6702a3690bce9a9d6048b6ff34f78acc71e9949578b57bca3f2a214ae98a: Status 404 returned error can't find the container with id 884a6702a3690bce9a9d6048b6ff34f78acc71e9949578b57bca3f2a214ae98a Feb 18 00:46:46 crc kubenswrapper[4858]: I0218 00:46:46.587255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6df979cf4-kxwdv" event={"ID":"eeca3542-a3d9-461e-904f-db09c8549564","Type":"ContainerStarted","Data":"b58c955842d1696ac53f23382824565062ac499c31011aadb716d725dca54ec5"} Feb 18 00:46:46 crc kubenswrapper[4858]: I0218 00:46:46.587679 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6df979cf4-kxwdv" event={"ID":"eeca3542-a3d9-461e-904f-db09c8549564","Type":"ContainerStarted","Data":"0922ee82fea5e2846624c18a452a7e8d16d21dccc63e3af30e31e8f577fde9f0"} Feb 18 00:46:46 crc kubenswrapper[4858]: I0218 00:46:46.588935 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" event={"ID":"9ccccd6f-f4c0-4948-a851-e837f10702c3","Type":"ContainerStarted","Data":"fd07d3b77362bf24163b9b25da4d298270e2cd4877f70b35c84abbd6736aec59"} Feb 18 00:46:46 crc kubenswrapper[4858]: I0218 00:46:46.590249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" event={"ID":"95ad9559-743e-4d16-8dba-6cea830de767","Type":"ContainerStarted","Data":"884a6702a3690bce9a9d6048b6ff34f78acc71e9949578b57bca3f2a214ae98a"} Feb 18 00:46:46 crc kubenswrapper[4858]: I0218 00:46:46.612616 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6df979cf4-kxwdv" podStartSLOduration=2.612601519 podStartE2EDuration="2.612601519s" podCreationTimestamp="2026-02-18 00:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:46:46.60845168 +0000 UTC m=+759.914288412" watchObservedRunningTime="2026-02-18 00:46:46.612601519 +0000 UTC 
m=+759.918438251" Feb 18 00:46:46 crc kubenswrapper[4858]: I0218 00:46:46.730766 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 00:46:48 crc kubenswrapper[4858]: I0218 00:46:48.648448 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" event={"ID":"9ccccd6f-f4c0-4948-a851-e837f10702c3","Type":"ContainerStarted","Data":"791bedba206c73843756f922653ab94d573fb495a083d1f23164ea613801e6bc"} Feb 18 00:46:48 crc kubenswrapper[4858]: I0218 00:46:48.649302 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:46:48 crc kubenswrapper[4858]: I0218 00:46:48.652112 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7" event={"ID":"897ba371-53cf-440a-9045-2d45bfae9032","Type":"ContainerStarted","Data":"86d440589d03ec19ad7ff06361b264f2978927dc5ab51269c871de99c9f317ef"} Feb 18 00:46:48 crc kubenswrapper[4858]: I0218 00:46:48.670866 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" podStartSLOduration=2.085347967 podStartE2EDuration="4.670839777s" podCreationTimestamp="2026-02-18 00:46:44 +0000 UTC" firstStartedPulling="2026-02-18 00:46:45.879821607 +0000 UTC m=+759.185658339" lastFinishedPulling="2026-02-18 00:46:48.465313417 +0000 UTC m=+761.771150149" observedRunningTime="2026-02-18 00:46:48.669071954 +0000 UTC m=+761.974908696" watchObservedRunningTime="2026-02-18 00:46:48.670839777 +0000 UTC m=+761.976676549" Feb 18 00:46:49 crc kubenswrapper[4858]: I0218 00:46:49.660524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gjmb7" event={"ID":"c83e1b85-4bb0-47f8-b152-a5f5c34cc919","Type":"ContainerStarted","Data":"97bd6dac0ca9c67b69164b0782055e5925cbe20fe70c6f86a776d995aede9f0c"} Feb 18 00:46:49 crc kubenswrapper[4858]: I0218 00:46:49.660876 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:49 crc kubenswrapper[4858]: I0218 00:46:49.682793 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-gjmb7" podStartSLOduration=2.232098013 podStartE2EDuration="5.682766937s" podCreationTimestamp="2026-02-18 00:46:44 +0000 UTC" firstStartedPulling="2026-02-18 00:46:44.98911602 +0000 UTC m=+758.294952752" lastFinishedPulling="2026-02-18 00:46:48.439784944 +0000 UTC m=+761.745621676" observedRunningTime="2026-02-18 00:46:49.679811166 +0000 UTC m=+762.985647898" watchObservedRunningTime="2026-02-18 00:46:49.682766937 +0000 UTC m=+762.988603689" Feb 18 00:46:50 crc kubenswrapper[4858]: I0218 00:46:50.669677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" event={"ID":"95ad9559-743e-4d16-8dba-6cea830de767","Type":"ContainerStarted","Data":"b3e06f0884f338cb3969bcfdff0c2f34bb0ee74794046c07548885a060247c63"} Feb 18 00:46:50 crc kubenswrapper[4858]: I0218 00:46:50.728676 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfxkk" podStartSLOduration=3.26339173 podStartE2EDuration="6.728647843s" podCreationTimestamp="2026-02-18 00:46:44 +0000 UTC" firstStartedPulling="2026-02-18 00:46:46.19782021 +0000 UTC m=+759.503656942" 
lastFinishedPulling="2026-02-18 00:46:49.663076313 +0000 UTC m=+762.968913055" observedRunningTime="2026-02-18 00:46:50.725425766 +0000 UTC m=+764.031262508" watchObservedRunningTime="2026-02-18 00:46:50.728647843 +0000 UTC m=+764.034484575" Feb 18 00:46:52 crc kubenswrapper[4858]: I0218 00:46:52.688983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7" event={"ID":"897ba371-53cf-440a-9045-2d45bfae9032","Type":"ContainerStarted","Data":"a88a1b94a352d5588935a9fa9da89392583a19ee9434eeb331101e13e0e5cbe0"} Feb 18 00:46:52 crc kubenswrapper[4858]: I0218 00:46:52.714153 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jsdd7" podStartSLOduration=2.299463823 podStartE2EDuration="8.714136972s" podCreationTimestamp="2026-02-18 00:46:44 +0000 UTC" firstStartedPulling="2026-02-18 00:46:45.404723129 +0000 UTC m=+758.710559871" lastFinishedPulling="2026-02-18 00:46:51.819396288 +0000 UTC m=+765.125233020" observedRunningTime="2026-02-18 00:46:52.709620154 +0000 UTC m=+766.015456886" watchObservedRunningTime="2026-02-18 00:46:52.714136972 +0000 UTC m=+766.019973704" Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.004033 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gjmb7" Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.264937 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.265029 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.265092 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.265923 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"010ce5804c87e4080adda85e7efd663cb5c56ffcff1160985ac0b6c58a57396c"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.266031 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://010ce5804c87e4080adda85e7efd663cb5c56ffcff1160985ac0b6c58a57396c" gracePeriod=600 Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.266214 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.266288 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:55 crc 
kubenswrapper[4858]: I0218 00:46:55.277138 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.715048 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="010ce5804c87e4080adda85e7efd663cb5c56ffcff1160985ac0b6c58a57396c" exitCode=0 Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.715125 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"010ce5804c87e4080adda85e7efd663cb5c56ffcff1160985ac0b6c58a57396c"} Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.715743 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"4232da30acbadfd7cea82898dfbb989b7b4cf3f5d440e264df04ddfd7051cdf7"} Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.715787 4858 scope.go:117] "RemoveContainer" containerID="8c79037d14a94a75ba333833fe3bdef198d4e97042dfa17705bf757bc2a57baf" Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.722219 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6df979cf4-kxwdv" Feb 18 00:46:55 crc kubenswrapper[4858]: I0218 00:46:55.815994 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lpg4n"] Feb 18 00:47:05 crc kubenswrapper[4858]: I0218 00:47:05.533008 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6nkwp" Feb 18 00:47:20 crc kubenswrapper[4858]: I0218 00:47:20.881718 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-lpg4n" podUID="a82bb6ce-4801-417a-a4e2-93d1667999ee" containerName="console" containerID="cri-o://e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108" gracePeriod=15 Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.385409 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lpg4n_a82bb6ce-4801-417a-a4e2-93d1667999ee/console/0.log" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.385751 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.449068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkrps\" (UniqueName: \"kubernetes.io/projected/a82bb6ce-4801-417a-a4e2-93d1667999ee-kube-api-access-fkrps\") pod \"a82bb6ce-4801-417a-a4e2-93d1667999ee\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.449120 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-oauth-serving-cert\") pod \"a82bb6ce-4801-417a-a4e2-93d1667999ee\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.449202 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-serving-cert\") pod \"a82bb6ce-4801-417a-a4e2-93d1667999ee\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.449230 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-oauth-config\") pod \"a82bb6ce-4801-417a-a4e2-93d1667999ee\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.449260 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-trusted-ca-bundle\") pod \"a82bb6ce-4801-417a-a4e2-93d1667999ee\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.449288 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-config\") pod \"a82bb6ce-4801-417a-a4e2-93d1667999ee\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.449349 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-service-ca\") pod \"a82bb6ce-4801-417a-a4e2-93d1667999ee\" (UID: \"a82bb6ce-4801-417a-a4e2-93d1667999ee\") " Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.450018 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-config" (OuterVolumeSpecName: "console-config") pod "a82bb6ce-4801-417a-a4e2-93d1667999ee" (UID: "a82bb6ce-4801-417a-a4e2-93d1667999ee"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.450086 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-service-ca" (OuterVolumeSpecName: "service-ca") pod "a82bb6ce-4801-417a-a4e2-93d1667999ee" (UID: "a82bb6ce-4801-417a-a4e2-93d1667999ee"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.450096 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a82bb6ce-4801-417a-a4e2-93d1667999ee" (UID: "a82bb6ce-4801-417a-a4e2-93d1667999ee"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.450362 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a82bb6ce-4801-417a-a4e2-93d1667999ee" (UID: "a82bb6ce-4801-417a-a4e2-93d1667999ee"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.464721 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a82bb6ce-4801-417a-a4e2-93d1667999ee" (UID: "a82bb6ce-4801-417a-a4e2-93d1667999ee"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.465066 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82bb6ce-4801-417a-a4e2-93d1667999ee-kube-api-access-fkrps" (OuterVolumeSpecName: "kube-api-access-fkrps") pod "a82bb6ce-4801-417a-a4e2-93d1667999ee" (UID: "a82bb6ce-4801-417a-a4e2-93d1667999ee"). InnerVolumeSpecName "kube-api-access-fkrps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.473960 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a82bb6ce-4801-417a-a4e2-93d1667999ee" (UID: "a82bb6ce-4801-417a-a4e2-93d1667999ee"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.550433 4858 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.550471 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.550483 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.550513 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.550524 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.550536 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkrps\" (UniqueName: \"kubernetes.io/projected/a82bb6ce-4801-417a-a4e2-93d1667999ee-kube-api-access-fkrps\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.550549 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a82bb6ce-4801-417a-a4e2-93d1667999ee-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.921284 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lpg4n_a82bb6ce-4801-417a-a4e2-93d1667999ee/console/0.log" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.921340 4858 generic.go:334] "Generic (PLEG): container finished" podID="a82bb6ce-4801-417a-a4e2-93d1667999ee" containerID="e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108" exitCode=2 Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.921372 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lpg4n" event={"ID":"a82bb6ce-4801-417a-a4e2-93d1667999ee","Type":"ContainerDied","Data":"e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108"} Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.921409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lpg4n" event={"ID":"a82bb6ce-4801-417a-a4e2-93d1667999ee","Type":"ContainerDied","Data":"b7f1f89d9e0269667a5d99f9d919a6e5d14403ba3a2e94ef02d07c901f1edb0a"} Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.921423 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lpg4n" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.921430 4858 scope.go:117] "RemoveContainer" containerID="e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.948079 4858 scope.go:117] "RemoveContainer" containerID="e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108" Feb 18 00:47:21 crc kubenswrapper[4858]: E0218 00:47:21.948511 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108\": container with ID starting with e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108 not found: ID does not exist" containerID="e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.948545 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108"} err="failed to get container status \"e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108\": rpc error: code = NotFound desc = could not find container \"e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108\": container with ID starting with e1db85881b9dde3bc9c60fed94b1a62a689493603329b376124848c707df4108 not found: ID does not exist" Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.956305 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lpg4n"] Feb 18 00:47:21 crc kubenswrapper[4858]: I0218 00:47:21.960532 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-lpg4n"] Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.039240 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh"] Feb 18 00:47:23 crc kubenswrapper[4858]: E0218 00:47:23.039950 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82bb6ce-4801-417a-a4e2-93d1667999ee" containerName="console" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.039978 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82bb6ce-4801-417a-a4e2-93d1667999ee" containerName="console" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.040239 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82bb6ce-4801-417a-a4e2-93d1667999ee" containerName="console" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.043395 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.046029 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.056875 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh"] Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.072377 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.072507 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzbqc\" (UniqueName: \"kubernetes.io/projected/f00b2490-8dc9-4640-924a-0d90a2bca37e-kube-api-access-xzbqc\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.072731 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.173948 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzbqc\" (UniqueName: \"kubernetes.io/projected/f00b2490-8dc9-4640-924a-0d90a2bca37e-kube-api-access-xzbqc\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.174007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.174068 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.174654 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.174750 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.201940 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzbqc\" (UniqueName: \"kubernetes.io/projected/f00b2490-8dc9-4640-924a-0d90a2bca37e-kube-api-access-xzbqc\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.372745 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.429582 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82bb6ce-4801-417a-a4e2-93d1667999ee" path="/var/lib/kubelet/pods/a82bb6ce-4801-417a-a4e2-93d1667999ee/volumes" Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.612916 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh"] Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.962279 4858 generic.go:334] "Generic (PLEG): container finished" podID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerID="34806e8bf12cff4d69e3e38fd4fc5153cc6657ca1baf1af9d97623df3df09fac" exitCode=0 Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.962336 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" event={"ID":"f00b2490-8dc9-4640-924a-0d90a2bca37e","Type":"ContainerDied","Data":"34806e8bf12cff4d69e3e38fd4fc5153cc6657ca1baf1af9d97623df3df09fac"} Feb 18 00:47:23 crc kubenswrapper[4858]: I0218 00:47:23.962376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" event={"ID":"f00b2490-8dc9-4640-924a-0d90a2bca37e","Type":"ContainerStarted","Data":"4755731501a12f5ed68045763ac707f0f5795f285af5c67b8dee989a0e044527"} Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.369286 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bb5gn"] Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.373642 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.394811 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bb5gn"] Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.410217 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rfb\" (UniqueName: \"kubernetes.io/projected/5359939f-23c3-429b-9bd5-7826472d7333-kube-api-access-56rfb\") pod \"redhat-operators-bb5gn\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.410306 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-catalog-content\") pod \"redhat-operators-bb5gn\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.410376 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-utilities\") pod \"redhat-operators-bb5gn\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.512092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-utilities\") pod \"redhat-operators-bb5gn\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.512219 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56rfb\" (UniqueName: \"kubernetes.io/projected/5359939f-23c3-429b-9bd5-7826472d7333-kube-api-access-56rfb\") pod \"redhat-operators-bb5gn\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.512265 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-catalog-content\") pod \"redhat-operators-bb5gn\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.513110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-utilities\") pod \"redhat-operators-bb5gn\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.513663 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-catalog-content\") pod \"redhat-operators-bb5gn\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.553603 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-56rfb\" (UniqueName: \"kubernetes.io/projected/5359939f-23c3-429b-9bd5-7826472d7333-kube-api-access-56rfb\") pod \"redhat-operators-bb5gn\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.731645 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.954090 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bb5gn"] Feb 18 00:47:25 crc kubenswrapper[4858]: W0218 00:47:25.965278 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5359939f_23c3_429b_9bd5_7826472d7333.slice/crio-2b69bb162254a862fda44712c74c8085347891b10e4fc29289a6dd8c9c933e0a WatchSource:0}: Error finding container 2b69bb162254a862fda44712c74c8085347891b10e4fc29289a6dd8c9c933e0a: Status 404 returned error can't find the container with id 2b69bb162254a862fda44712c74c8085347891b10e4fc29289a6dd8c9c933e0a Feb 18 00:47:25 crc kubenswrapper[4858]: I0218 00:47:25.978974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb5gn" event={"ID":"5359939f-23c3-429b-9bd5-7826472d7333","Type":"ContainerStarted","Data":"2b69bb162254a862fda44712c74c8085347891b10e4fc29289a6dd8c9c933e0a"} Feb 18 00:47:26 crc kubenswrapper[4858]: I0218 00:47:26.984400 4858 generic.go:334] "Generic (PLEG): container finished" podID="5359939f-23c3-429b-9bd5-7826472d7333" containerID="f807440ea4f3dd849a4fc175dd3ddcc5f862ff6faf6206be89049dc1ba290d8d" exitCode=0 Feb 18 00:47:26 crc kubenswrapper[4858]: I0218 00:47:26.984447 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb5gn" event={"ID":"5359939f-23c3-429b-9bd5-7826472d7333","Type":"ContainerDied","Data":"f807440ea4f3dd849a4fc175dd3ddcc5f862ff6faf6206be89049dc1ba290d8d"} Feb 18 00:47:28 crc kubenswrapper[4858]: I0218 00:47:28.003094 4858 generic.go:334] "Generic (PLEG): container finished" podID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerID="9f9c698e2d988f5e35de9d9e9db92f743eedf3553929fdd186f1c5dfc397ca5a" exitCode=0 Feb 18 00:47:28 crc kubenswrapper[4858]: I0218 00:47:28.003179 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" event={"ID":"f00b2490-8dc9-4640-924a-0d90a2bca37e","Type":"ContainerDied","Data":"9f9c698e2d988f5e35de9d9e9db92f743eedf3553929fdd186f1c5dfc397ca5a"} Feb 18 00:47:29 crc kubenswrapper[4858]: I0218 00:47:29.014258 4858 generic.go:334] "Generic (PLEG): container finished" podID="5359939f-23c3-429b-9bd5-7826472d7333" containerID="5eb2202882517a06bafb0fb5c0291a7a488a591448fa0884f7dc9650280dd1a1" exitCode=0 Feb 18 00:47:29 crc kubenswrapper[4858]: I0218 00:47:29.014325 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb5gn" event={"ID":"5359939f-23c3-429b-9bd5-7826472d7333","Type":"ContainerDied","Data":"5eb2202882517a06bafb0fb5c0291a7a488a591448fa0884f7dc9650280dd1a1"} Feb 18 00:47:29 crc kubenswrapper[4858]: I0218 00:47:29.017607 4858 generic.go:334] "Generic (PLEG): container finished" podID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerID="065788b8eae35bddc5576f3ce6bfe95f521d69615f1e4f49f22efd3d4c0fc988" exitCode=0 Feb 18 00:47:29 crc kubenswrapper[4858]: I0218 
00:47:29.017655 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" event={"ID":"f00b2490-8dc9-4640-924a-0d90a2bca37e","Type":"ContainerDied","Data":"065788b8eae35bddc5576f3ce6bfe95f521d69615f1e4f49f22efd3d4c0fc988"} Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.028199 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb5gn" event={"ID":"5359939f-23c3-429b-9bd5-7826472d7333","Type":"ContainerStarted","Data":"b0c2d5bca6148c44cf1784c32e2d9f475624a18b26205b7c2df51f73342a890a"} Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.073798 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bb5gn" podStartSLOduration=2.5398595840000002 podStartE2EDuration="5.073761163s" podCreationTimestamp="2026-02-18 00:47:25 +0000 UTC" firstStartedPulling="2026-02-18 00:47:26.985726746 +0000 UTC m=+800.291563478" lastFinishedPulling="2026-02-18 00:47:29.519628325 +0000 UTC m=+802.825465057" observedRunningTime="2026-02-18 00:47:30.065088275 +0000 UTC m=+803.370925037" watchObservedRunningTime="2026-02-18 00:47:30.073761163 +0000 UTC m=+803.379597945" Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.303603 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.380242 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-util\") pod \"f00b2490-8dc9-4640-924a-0d90a2bca37e\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.380311 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-bundle\") pod \"f00b2490-8dc9-4640-924a-0d90a2bca37e\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.380347 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzbqc\" (UniqueName: \"kubernetes.io/projected/f00b2490-8dc9-4640-924a-0d90a2bca37e-kube-api-access-xzbqc\") pod \"f00b2490-8dc9-4640-924a-0d90a2bca37e\" (UID: \"f00b2490-8dc9-4640-924a-0d90a2bca37e\") " Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.381815 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-bundle" (OuterVolumeSpecName: "bundle") pod "f00b2490-8dc9-4640-924a-0d90a2bca37e" (UID: "f00b2490-8dc9-4640-924a-0d90a2bca37e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.386171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00b2490-8dc9-4640-924a-0d90a2bca37e-kube-api-access-xzbqc" (OuterVolumeSpecName: "kube-api-access-xzbqc") pod "f00b2490-8dc9-4640-924a-0d90a2bca37e" (UID: "f00b2490-8dc9-4640-924a-0d90a2bca37e"). InnerVolumeSpecName "kube-api-access-xzbqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.390534 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-util" (OuterVolumeSpecName: "util") pod "f00b2490-8dc9-4640-924a-0d90a2bca37e" (UID: "f00b2490-8dc9-4640-924a-0d90a2bca37e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.482128 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.482161 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzbqc\" (UniqueName: \"kubernetes.io/projected/f00b2490-8dc9-4640-924a-0d90a2bca37e-kube-api-access-xzbqc\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:30 crc kubenswrapper[4858]: I0218 00:47:30.482171 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00b2490-8dc9-4640-924a-0d90a2bca37e-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:31 crc kubenswrapper[4858]: I0218 00:47:31.036768 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" event={"ID":"f00b2490-8dc9-4640-924a-0d90a2bca37e","Type":"ContainerDied","Data":"4755731501a12f5ed68045763ac707f0f5795f285af5c67b8dee989a0e044527"} Feb 18 00:47:31 crc kubenswrapper[4858]: I0218 00:47:31.037103 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4755731501a12f5ed68045763ac707f0f5795f285af5c67b8dee989a0e044527" Feb 18 00:47:31 crc kubenswrapper[4858]: I0218 00:47:31.036828 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh" Feb 18 00:47:35 crc kubenswrapper[4858]: I0218 00:47:35.732462 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:35 crc kubenswrapper[4858]: I0218 00:47:35.732793 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:36 crc kubenswrapper[4858]: I0218 00:47:36.800482 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bb5gn" podUID="5359939f-23c3-429b-9bd5-7826472d7333" containerName="registry-server" probeResult="failure" output=< Feb 18 00:47:36 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 00:47:36 crc kubenswrapper[4858]: > Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.218137 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk"] Feb 18 00:47:38 crc kubenswrapper[4858]: E0218 00:47:38.218357 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerName="util" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.218369 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerName="util" Feb 18 00:47:38 crc kubenswrapper[4858]: E0218 00:47:38.218381 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerName="extract" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.218387 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerName="extract" Feb 18 00:47:38 crc kubenswrapper[4858]: E0218 00:47:38.218410 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerName="pull" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.218416 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerName="pull" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.218537 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00b2490-8dc9-4640-924a-0d90a2bca37e" containerName="extract" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.218927 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.220381 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.220908 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xrnzs" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.221266 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.221272 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.224731 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.244529 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk"] Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.307793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a459fc2d-abc9-40ac-9834-23438e1d8d3d-apiservice-cert\") pod \"metallb-operator-controller-manager-5fdf7d4974-w5ljk\" (UID: \"a459fc2d-abc9-40ac-9834-23438e1d8d3d\") " pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.307875 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a459fc2d-abc9-40ac-9834-23438e1d8d3d-webhook-cert\") pod \"metallb-operator-controller-manager-5fdf7d4974-w5ljk\" (UID: \"a459fc2d-abc9-40ac-9834-23438e1d8d3d\") " pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.307911 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkm6\" (UniqueName: \"kubernetes.io/projected/a459fc2d-abc9-40ac-9834-23438e1d8d3d-kube-api-access-2pkm6\") pod \"metallb-operator-controller-manager-5fdf7d4974-w5ljk\" (UID: \"a459fc2d-abc9-40ac-9834-23438e1d8d3d\") " pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.409061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkm6\" (UniqueName: \"kubernetes.io/projected/a459fc2d-abc9-40ac-9834-23438e1d8d3d-kube-api-access-2pkm6\") pod \"metallb-operator-controller-manager-5fdf7d4974-w5ljk\" (UID: \"a459fc2d-abc9-40ac-9834-23438e1d8d3d\") " pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.409661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a459fc2d-abc9-40ac-9834-23438e1d8d3d-apiservice-cert\") pod \"metallb-operator-controller-manager-5fdf7d4974-w5ljk\" (UID: \"a459fc2d-abc9-40ac-9834-23438e1d8d3d\") " pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.410601 
4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a459fc2d-abc9-40ac-9834-23438e1d8d3d-webhook-cert\") pod \"metallb-operator-controller-manager-5fdf7d4974-w5ljk\" (UID: \"a459fc2d-abc9-40ac-9834-23438e1d8d3d\") " pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.419671 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a459fc2d-abc9-40ac-9834-23438e1d8d3d-webhook-cert\") pod \"metallb-operator-controller-manager-5fdf7d4974-w5ljk\" (UID: \"a459fc2d-abc9-40ac-9834-23438e1d8d3d\") " pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.431364 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a459fc2d-abc9-40ac-9834-23438e1d8d3d-apiservice-cert\") pod \"metallb-operator-controller-manager-5fdf7d4974-w5ljk\" (UID: \"a459fc2d-abc9-40ac-9834-23438e1d8d3d\") " pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.436484 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkm6\" (UniqueName: \"kubernetes.io/projected/a459fc2d-abc9-40ac-9834-23438e1d8d3d-kube-api-access-2pkm6\") pod \"metallb-operator-controller-manager-5fdf7d4974-w5ljk\" (UID: \"a459fc2d-abc9-40ac-9834-23438e1d8d3d\") " pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.534169 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.562519 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf"] Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.563192 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.572884 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.573114 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.573290 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2rp6g" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.574147 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf"] Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.612600 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23f9d825-01d5-40a5-9999-8b72fbaee043-webhook-cert\") pod \"metallb-operator-webhook-server-858d9bff6d-7w8qf\" (UID: \"23f9d825-01d5-40a5-9999-8b72fbaee043\") " pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.612656 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpsw\" (UniqueName: \"kubernetes.io/projected/23f9d825-01d5-40a5-9999-8b72fbaee043-kube-api-access-5lpsw\") pod \"metallb-operator-webhook-server-858d9bff6d-7w8qf\" (UID: \"23f9d825-01d5-40a5-9999-8b72fbaee043\") " pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.612685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23f9d825-01d5-40a5-9999-8b72fbaee043-apiservice-cert\") pod \"metallb-operator-webhook-server-858d9bff6d-7w8qf\" (UID: \"23f9d825-01d5-40a5-9999-8b72fbaee043\") " pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.714407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lpsw\" (UniqueName: \"kubernetes.io/projected/23f9d825-01d5-40a5-9999-8b72fbaee043-kube-api-access-5lpsw\") pod \"metallb-operator-webhook-server-858d9bff6d-7w8qf\" (UID: \"23f9d825-01d5-40a5-9999-8b72fbaee043\") " pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.714694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23f9d825-01d5-40a5-9999-8b72fbaee043-apiservice-cert\") pod \"metallb-operator-webhook-server-858d9bff6d-7w8qf\" (UID: \"23f9d825-01d5-40a5-9999-8b72fbaee043\") " pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.714766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23f9d825-01d5-40a5-9999-8b72fbaee043-webhook-cert\") pod \"metallb-operator-webhook-server-858d9bff6d-7w8qf\" (UID: \"23f9d825-01d5-40a5-9999-8b72fbaee043\") " pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 
00:47:38.732122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23f9d825-01d5-40a5-9999-8b72fbaee043-apiservice-cert\") pod \"metallb-operator-webhook-server-858d9bff6d-7w8qf\" (UID: \"23f9d825-01d5-40a5-9999-8b72fbaee043\") " pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.738349 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23f9d825-01d5-40a5-9999-8b72fbaee043-webhook-cert\") pod \"metallb-operator-webhook-server-858d9bff6d-7w8qf\" (UID: \"23f9d825-01d5-40a5-9999-8b72fbaee043\") " pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.747752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lpsw\" (UniqueName: \"kubernetes.io/projected/23f9d825-01d5-40a5-9999-8b72fbaee043-kube-api-access-5lpsw\") pod \"metallb-operator-webhook-server-858d9bff6d-7w8qf\" (UID: \"23f9d825-01d5-40a5-9999-8b72fbaee043\") " pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.874986 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk"] Feb 18 00:47:38 crc kubenswrapper[4858]: W0218 00:47:38.886443 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda459fc2d_abc9_40ac_9834_23438e1d8d3d.slice/crio-25e104fdc555d79df959e674ae72298cc7b118be1ca6d407a0d01a2510982e77 WatchSource:0}: Error finding container 25e104fdc555d79df959e674ae72298cc7b118be1ca6d407a0d01a2510982e77: Status 404 returned error can't find the container with id 25e104fdc555d79df959e674ae72298cc7b118be1ca6d407a0d01a2510982e77 Feb 18 00:47:38 crc kubenswrapper[4858]: I0218 00:47:38.909033 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:39 crc kubenswrapper[4858]: I0218 00:47:39.082127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" event={"ID":"a459fc2d-abc9-40ac-9834-23438e1d8d3d","Type":"ContainerStarted","Data":"25e104fdc555d79df959e674ae72298cc7b118be1ca6d407a0d01a2510982e77"} Feb 18 00:47:39 crc kubenswrapper[4858]: I0218 00:47:39.358364 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf"] Feb 18 00:47:39 crc kubenswrapper[4858]: W0218 00:47:39.364979 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23f9d825_01d5_40a5_9999_8b72fbaee043.slice/crio-d0d00a481f2bc1ca380d9a7340791161814f14afcbda3708e9d05fa1c0c4bea2 WatchSource:0}: Error finding container d0d00a481f2bc1ca380d9a7340791161814f14afcbda3708e9d05fa1c0c4bea2: Status 404 returned error can't find the container with id d0d00a481f2bc1ca380d9a7340791161814f14afcbda3708e9d05fa1c0c4bea2 Feb 18 00:47:40 crc kubenswrapper[4858]: I0218 00:47:40.088789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" event={"ID":"23f9d825-01d5-40a5-9999-8b72fbaee043","Type":"ContainerStarted","Data":"d0d00a481f2bc1ca380d9a7340791161814f14afcbda3708e9d05fa1c0c4bea2"} Feb 18 00:47:43 crc kubenswrapper[4858]: I0218 00:47:43.113567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" event={"ID":"a459fc2d-abc9-40ac-9834-23438e1d8d3d","Type":"ContainerStarted","Data":"f89fc05af8f2e36ada294921e58a77e2694c439366a31e58db6652ad84d3c593"} Feb 18 00:47:43 crc kubenswrapper[4858]: I0218 00:47:43.115344 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:47:43 crc kubenswrapper[4858]: I0218 00:47:43.159645 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" podStartSLOduration=1.771353145 podStartE2EDuration="5.159626927s" podCreationTimestamp="2026-02-18 00:47:38 +0000 UTC" firstStartedPulling="2026-02-18 00:47:38.888076456 +0000 UTC m=+812.193913188" lastFinishedPulling="2026-02-18 00:47:42.276350228 +0000 UTC m=+815.582186970" observedRunningTime="2026-02-18 00:47:43.159068214 +0000 UTC m=+816.464904956" watchObservedRunningTime="2026-02-18 00:47:43.159626927 +0000 UTC m=+816.465463659" Feb 18 00:47:44 crc kubenswrapper[4858]: I0218 00:47:44.135490 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" event={"ID":"23f9d825-01d5-40a5-9999-8b72fbaee043","Type":"ContainerStarted","Data":"a8ede75a847363d89cedd154dc2ff9f9d5bdff01c05d232fdd837d477f417f2e"} Feb 18 00:47:44 crc kubenswrapper[4858]: I0218 00:47:44.136167 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:47:44 crc kubenswrapper[4858]: I0218 00:47:44.175045 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" podStartSLOduration=1.646863233 podStartE2EDuration="6.175023412s" 
podCreationTimestamp="2026-02-18 00:47:38 +0000 UTC" firstStartedPulling="2026-02-18 00:47:39.368090112 +0000 UTC m=+812.673926844" lastFinishedPulling="2026-02-18 00:47:43.896250261 +0000 UTC m=+817.202087023" observedRunningTime="2026-02-18 00:47:44.166292611 +0000 UTC m=+817.472129353" watchObservedRunningTime="2026-02-18 00:47:44.175023412 +0000 UTC m=+817.480860154" Feb 18 00:47:45 crc kubenswrapper[4858]: I0218 00:47:45.792576 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:45 crc kubenswrapper[4858]: I0218 00:47:45.856392 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:47 crc kubenswrapper[4858]: I0218 00:47:47.755429 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bb5gn"] Feb 18 00:47:47 crc kubenswrapper[4858]: I0218 00:47:47.755954 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bb5gn" podUID="5359939f-23c3-429b-9bd5-7826472d7333" containerName="registry-server" containerID="cri-o://b0c2d5bca6148c44cf1784c32e2d9f475624a18b26205b7c2df51f73342a890a" gracePeriod=2 Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.167240 4858 generic.go:334] "Generic (PLEG): container finished" podID="5359939f-23c3-429b-9bd5-7826472d7333" containerID="b0c2d5bca6148c44cf1784c32e2d9f475624a18b26205b7c2df51f73342a890a" exitCode=0 Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.167294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb5gn" event={"ID":"5359939f-23c3-429b-9bd5-7826472d7333","Type":"ContainerDied","Data":"b0c2d5bca6148c44cf1784c32e2d9f475624a18b26205b7c2df51f73342a890a"} Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.167532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bb5gn" event={"ID":"5359939f-23c3-429b-9bd5-7826472d7333","Type":"ContainerDied","Data":"2b69bb162254a862fda44712c74c8085347891b10e4fc29289a6dd8c9c933e0a"} Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.167547 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b69bb162254a862fda44712c74c8085347891b10e4fc29289a6dd8c9c933e0a" Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.168045 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.260099 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-catalog-content\") pod \"5359939f-23c3-429b-9bd5-7826472d7333\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.260211 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-utilities\") pod \"5359939f-23c3-429b-9bd5-7826472d7333\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.260355 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56rfb\" (UniqueName: \"kubernetes.io/projected/5359939f-23c3-429b-9bd5-7826472d7333-kube-api-access-56rfb\") pod \"5359939f-23c3-429b-9bd5-7826472d7333\" (UID: \"5359939f-23c3-429b-9bd5-7826472d7333\") " Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.261937 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-utilities" (OuterVolumeSpecName: "utilities") pod "5359939f-23c3-429b-9bd5-7826472d7333" (UID: "5359939f-23c3-429b-9bd5-7826472d7333"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.269699 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5359939f-23c3-429b-9bd5-7826472d7333-kube-api-access-56rfb" (OuterVolumeSpecName: "kube-api-access-56rfb") pod "5359939f-23c3-429b-9bd5-7826472d7333" (UID: "5359939f-23c3-429b-9bd5-7826472d7333"). InnerVolumeSpecName "kube-api-access-56rfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.364490 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.364550 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56rfb\" (UniqueName: \"kubernetes.io/projected/5359939f-23c3-429b-9bd5-7826472d7333-kube-api-access-56rfb\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.406818 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5359939f-23c3-429b-9bd5-7826472d7333" (UID: "5359939f-23c3-429b-9bd5-7826472d7333"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:47:48 crc kubenswrapper[4858]: I0218 00:47:48.465651 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5359939f-23c3-429b-9bd5-7826472d7333-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:49 crc kubenswrapper[4858]: I0218 00:47:49.173791 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bb5gn" Feb 18 00:47:49 crc kubenswrapper[4858]: I0218 00:47:49.213563 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bb5gn"] Feb 18 00:47:49 crc kubenswrapper[4858]: I0218 00:47:49.216121 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bb5gn"] Feb 18 00:47:49 crc kubenswrapper[4858]: I0218 00:47:49.428957 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5359939f-23c3-429b-9bd5-7826472d7333" path="/var/lib/kubelet/pods/5359939f-23c3-429b-9bd5-7826472d7333/volumes" Feb 18 00:47:58 crc kubenswrapper[4858]: I0218 00:47:58.914121 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-858d9bff6d-7w8qf" Feb 18 00:48:18 crc kubenswrapper[4858]: I0218 00:48:18.536182 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5fdf7d4974-w5ljk" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.296716 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bdtxp"] Feb 18 00:48:19 crc kubenswrapper[4858]: E0218 00:48:19.297346 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5359939f-23c3-429b-9bd5-7826472d7333" containerName="registry-server" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.297411 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5359939f-23c3-429b-9bd5-7826472d7333" containerName="registry-server" Feb 18 00:48:19 crc kubenswrapper[4858]: E0218 00:48:19.297473 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5359939f-23c3-429b-9bd5-7826472d7333" containerName="extract-content" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.297551 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5359939f-23c3-429b-9bd5-7826472d7333" containerName="extract-content" Feb 18 00:48:19 crc kubenswrapper[4858]: E0218 00:48:19.297618 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5359939f-23c3-429b-9bd5-7826472d7333" containerName="extract-utilities" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.297671 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5359939f-23c3-429b-9bd5-7826472d7333" containerName="extract-utilities" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.297822 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5359939f-23c3-429b-9bd5-7826472d7333" containerName="registry-server" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.299710 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.301885 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.302159 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.302444 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nlm9f" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.347020 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m"] Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.348373 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.350238 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.361147 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m"] Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.427821 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83a08fae-fbbe-420a-a998-b8ecafd45b71-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-qv72m\" (UID: \"83a08fae-fbbe-420a-a998-b8ecafd45b71\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.427854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-metrics\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.427878 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-frr-conf\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.427898 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98aca645-8ef3-479a-9b7b-732ad5f24375-metrics-certs\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.427932 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-reloader\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.427951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-frr-sockets\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " 
pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.427955 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9xp4k"] Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.427977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8nc\" (UniqueName: \"kubernetes.io/projected/98aca645-8ef3-479a-9b7b-732ad5f24375-kube-api-access-dg8nc\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.428005 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/98aca645-8ef3-479a-9b7b-732ad5f24375-frr-startup\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.428023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wblzz\" (UniqueName: \"kubernetes.io/projected/83a08fae-fbbe-420a-a998-b8ecafd45b71-kube-api-access-wblzz\") pod \"frr-k8s-webhook-server-78b44bf5bb-qv72m\" (UID: \"83a08fae-fbbe-420a-a998-b8ecafd45b71\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.428851 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.430437 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.431597 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.431612 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.432102 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ctwvp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.452329 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-vbx9l"] Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.453131 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.454759 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.472016 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-vbx9l"] Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.528833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/98aca645-8ef3-479a-9b7b-732ad5f24375-frr-startup\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.528881 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wblzz\" (UniqueName: \"kubernetes.io/projected/83a08fae-fbbe-420a-a998-b8ecafd45b71-kube-api-access-wblzz\") pod \"frr-k8s-webhook-server-78b44bf5bb-qv72m\" (UID: \"83a08fae-fbbe-420a-a998-b8ecafd45b71\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.528939 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83a08fae-fbbe-420a-a998-b8ecafd45b71-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-qv72m\" (UID: \"83a08fae-fbbe-420a-a998-b8ecafd45b71\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.528962 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-metrics\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.528989 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-memberlist\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-frr-conf\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529034 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98aca645-8ef3-479a-9b7b-732ad5f24375-metrics-certs\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-reloader\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529070 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/650a8673-9066-448b-bab4-a90e9203dc70-metrics-certs\") pod \"controller-69bbfbf88f-vbx9l\" (UID: \"650a8673-9066-448b-bab4-a90e9203dc70\") " pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529085 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3dad204a-e97b-4be0-bc97-b3327c0eaef9-metallb-excludel2\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529101 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-metrics-certs\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-frr-sockets\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/650a8673-9066-448b-bab4-a90e9203dc70-cert\") pod \"controller-69bbfbf88f-vbx9l\" (UID: \"650a8673-9066-448b-bab4-a90e9203dc70\") " pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529155 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpctm\" (UniqueName: \"kubernetes.io/projected/650a8673-9066-448b-bab4-a90e9203dc70-kube-api-access-jpctm\") pod \"controller-69bbfbf88f-vbx9l\" (UID: \"650a8673-9066-448b-bab4-a90e9203dc70\") " pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529179 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg8nc\" (UniqueName: \"kubernetes.io/projected/98aca645-8ef3-479a-9b7b-732ad5f24375-kube-api-access-dg8nc\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529204 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk5c6\" (UniqueName: \"kubernetes.io/projected/3dad204a-e97b-4be0-bc97-b3327c0eaef9-kube-api-access-rk5c6\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.529761 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/98aca645-8ef3-479a-9b7b-732ad5f24375-frr-startup\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.530095 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-frr-sockets\") pod 
\"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.530149 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-frr-conf\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.530319 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-metrics\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: E0218 00:48:19.530401 4858 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 18 00:48:19 crc kubenswrapper[4858]: E0218 00:48:19.530445 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98aca645-8ef3-479a-9b7b-732ad5f24375-metrics-certs podName:98aca645-8ef3-479a-9b7b-732ad5f24375 nodeName:}" failed. No retries permitted until 2026-02-18 00:48:20.030431742 +0000 UTC m=+853.336268474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98aca645-8ef3-479a-9b7b-732ad5f24375-metrics-certs") pod "frr-k8s-bdtxp" (UID: "98aca645-8ef3-479a-9b7b-732ad5f24375") : secret "frr-k8s-certs-secret" not found Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.530817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/98aca645-8ef3-479a-9b7b-732ad5f24375-reloader\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.541438 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83a08fae-fbbe-420a-a998-b8ecafd45b71-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-qv72m\" (UID: \"83a08fae-fbbe-420a-a998-b8ecafd45b71\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.544714 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg8nc\" (UniqueName: \"kubernetes.io/projected/98aca645-8ef3-479a-9b7b-732ad5f24375-kube-api-access-dg8nc\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.548181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wblzz\" (UniqueName: \"kubernetes.io/projected/83a08fae-fbbe-420a-a998-b8ecafd45b71-kube-api-access-wblzz\") pod \"frr-k8s-webhook-server-78b44bf5bb-qv72m\" (UID: \"83a08fae-fbbe-420a-a998-b8ecafd45b71\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.629963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/650a8673-9066-448b-bab4-a90e9203dc70-cert\") pod \"controller-69bbfbf88f-vbx9l\" (UID: \"650a8673-9066-448b-bab4-a90e9203dc70\") " pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.630028 
4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpctm\" (UniqueName: \"kubernetes.io/projected/650a8673-9066-448b-bab4-a90e9203dc70-kube-api-access-jpctm\") pod \"controller-69bbfbf88f-vbx9l\" (UID: \"650a8673-9066-448b-bab4-a90e9203dc70\") " pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.630071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk5c6\" (UniqueName: \"kubernetes.io/projected/3dad204a-e97b-4be0-bc97-b3327c0eaef9-kube-api-access-rk5c6\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.630163 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-memberlist\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.630219 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/650a8673-9066-448b-bab4-a90e9203dc70-metrics-certs\") pod \"controller-69bbfbf88f-vbx9l\" (UID: \"650a8673-9066-448b-bab4-a90e9203dc70\") " pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.630246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3dad204a-e97b-4be0-bc97-b3327c0eaef9-metallb-excludel2\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.630272 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-metrics-certs\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: E0218 00:48:19.630402 4858 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 18 00:48:19 crc kubenswrapper[4858]: E0218 00:48:19.630465 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 00:48:19 crc kubenswrapper[4858]: E0218 00:48:19.630473 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-metrics-certs podName:3dad204a-e97b-4be0-bc97-b3327c0eaef9 nodeName:}" failed. No retries permitted until 2026-02-18 00:48:20.130446301 +0000 UTC m=+853.436283033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-metrics-certs") pod "speaker-9xp4k" (UID: "3dad204a-e97b-4be0-bc97-b3327c0eaef9") : secret "speaker-certs-secret" not found Feb 18 00:48:19 crc kubenswrapper[4858]: E0218 00:48:19.630540 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-memberlist podName:3dad204a-e97b-4be0-bc97-b3327c0eaef9 nodeName:}" failed. No retries permitted until 2026-02-18 00:48:20.130525163 +0000 UTC m=+853.436361895 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-memberlist") pod "speaker-9xp4k" (UID: "3dad204a-e97b-4be0-bc97-b3327c0eaef9") : secret "metallb-memberlist" not found Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.631119 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3dad204a-e97b-4be0-bc97-b3327c0eaef9-metallb-excludel2\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.632103 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.638036 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/650a8673-9066-448b-bab4-a90e9203dc70-metrics-certs\") pod \"controller-69bbfbf88f-vbx9l\" (UID: \"650a8673-9066-448b-bab4-a90e9203dc70\") " pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.643887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/650a8673-9066-448b-bab4-a90e9203dc70-cert\") pod \"controller-69bbfbf88f-vbx9l\" (UID: \"650a8673-9066-448b-bab4-a90e9203dc70\") " pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.646412 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpctm\" (UniqueName: \"kubernetes.io/projected/650a8673-9066-448b-bab4-a90e9203dc70-kube-api-access-jpctm\") pod \"controller-69bbfbf88f-vbx9l\" (UID: \"650a8673-9066-448b-bab4-a90e9203dc70\") " pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.654386 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk5c6\" (UniqueName: \"kubernetes.io/projected/3dad204a-e97b-4be0-bc97-b3327c0eaef9-kube-api-access-rk5c6\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.668480 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.774897 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:19 crc kubenswrapper[4858]: I0218 00:48:19.858565 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m"] Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.035878 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98aca645-8ef3-479a-9b7b-732ad5f24375-metrics-certs\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.042410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98aca645-8ef3-479a-9b7b-732ad5f24375-metrics-certs\") pod \"frr-k8s-bdtxp\" (UID: \"98aca645-8ef3-479a-9b7b-732ad5f24375\") " pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.139339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-memberlist\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.139438 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-metrics-certs\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:20 crc kubenswrapper[4858]: E0218 00:48:20.140359 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 00:48:20 crc kubenswrapper[4858]: E0218 00:48:20.140465 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-memberlist podName:3dad204a-e97b-4be0-bc97-b3327c0eaef9 nodeName:}" failed. No retries permitted until 2026-02-18 00:48:21.140414337 +0000 UTC m=+854.446251099 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-memberlist") pod "speaker-9xp4k" (UID: "3dad204a-e97b-4be0-bc97-b3327c0eaef9") : secret "metallb-memberlist" not found Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.143631 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-metrics-certs\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:20 crc kubenswrapper[4858]: W0218 00:48:20.191461 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod650a8673_9066_448b_bab4_a90e9203dc70.slice/crio-db2e399f6bbe5b39e4b539416f48a5fb050d2659b452516f02093ea3c693e710 WatchSource:0}: Error finding container db2e399f6bbe5b39e4b539416f48a5fb050d2659b452516f02093ea3c693e710: Status 404 returned error can't find the container with id db2e399f6bbe5b39e4b539416f48a5fb050d2659b452516f02093ea3c693e710 Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.195060 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-vbx9l"] Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.224690 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.449360 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" event={"ID":"83a08fae-fbbe-420a-a998-b8ecafd45b71","Type":"ContainerStarted","Data":"b52b4a5a4e6eb8307e1f91e85e17a02cab56ab2b6da4de125242a3d2c2c037f1"} Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.453712 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerStarted","Data":"ece4265b2b0b9b3e654452880fe4cb1662a7a372a7bbbced5699de7f4eac24ee"} Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.455534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-vbx9l" event={"ID":"650a8673-9066-448b-bab4-a90e9203dc70","Type":"ContainerStarted","Data":"c426c06cbb24f3c5941d82a811c5763707e741b3dccc813e70ed97ea638be93e"} Feb 18 00:48:20 crc kubenswrapper[4858]: I0218 00:48:20.455561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-vbx9l" event={"ID":"650a8673-9066-448b-bab4-a90e9203dc70","Type":"ContainerStarted","Data":"db2e399f6bbe5b39e4b539416f48a5fb050d2659b452516f02093ea3c693e710"} Feb 18 00:48:21 crc kubenswrapper[4858]: I0218 00:48:21.154623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-memberlist\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:21 crc kubenswrapper[4858]: I0218 00:48:21.162416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dad204a-e97b-4be0-bc97-b3327c0eaef9-memberlist\") pod \"speaker-9xp4k\" (UID: \"3dad204a-e97b-4be0-bc97-b3327c0eaef9\") " pod="metallb-system/speaker-9xp4k" Feb 18 00:48:21 crc kubenswrapper[4858]: I0218 00:48:21.244272 4858 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="metallb-system/speaker-9xp4k" Feb 18 00:48:21 crc kubenswrapper[4858]: W0218 00:48:21.281812 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dad204a_e97b_4be0_bc97_b3327c0eaef9.slice/crio-19b32deb78436af565afdda333259196c3abe1b2ff39ac98dbacb527ea8ee431 WatchSource:0}: Error finding container 19b32deb78436af565afdda333259196c3abe1b2ff39ac98dbacb527ea8ee431: Status 404 returned error can't find the container with id 19b32deb78436af565afdda333259196c3abe1b2ff39ac98dbacb527ea8ee431 Feb 18 00:48:21 crc kubenswrapper[4858]: I0218 00:48:21.466301 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-vbx9l" event={"ID":"650a8673-9066-448b-bab4-a90e9203dc70","Type":"ContainerStarted","Data":"bc2a8466de5d8bd0818e426efa74e2425891d783126309c1d03f821bf9f7b9e5"} Feb 18 00:48:21 crc kubenswrapper[4858]: I0218 00:48:21.466677 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:21 crc kubenswrapper[4858]: I0218 00:48:21.468526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9xp4k" event={"ID":"3dad204a-e97b-4be0-bc97-b3327c0eaef9","Type":"ContainerStarted","Data":"19b32deb78436af565afdda333259196c3abe1b2ff39ac98dbacb527ea8ee431"} Feb 18 00:48:21 crc kubenswrapper[4858]: I0218 00:48:21.493891 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-vbx9l" podStartSLOduration=2.49387 podStartE2EDuration="2.49387s" podCreationTimestamp="2026-02-18 00:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:21.486171663 +0000 UTC m=+854.792008405" watchObservedRunningTime="2026-02-18 00:48:21.49387 +0000 UTC m=+854.799706742" Feb 18 00:48:22 crc kubenswrapper[4858]: I0218 00:48:22.478953 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9xp4k" event={"ID":"3dad204a-e97b-4be0-bc97-b3327c0eaef9","Type":"ContainerStarted","Data":"02c8e227be0093821f753246f0b0cfcde75a1e257dd8764d82723a253bd20fdb"} Feb 18 00:48:22 crc kubenswrapper[4858]: I0218 00:48:22.479226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9xp4k" event={"ID":"3dad204a-e97b-4be0-bc97-b3327c0eaef9","Type":"ContainerStarted","Data":"0a70ce0f5e866904010603a756865102e5eeaf335dee38bd2b6593f8b32c2ba3"} Feb 18 00:48:22 crc kubenswrapper[4858]: I0218 00:48:22.498105 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9xp4k" podStartSLOduration=3.498077027 podStartE2EDuration="3.498077027s" podCreationTimestamp="2026-02-18 00:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:22.496016007 +0000 UTC m=+855.801852739" watchObservedRunningTime="2026-02-18 00:48:22.498077027 +0000 UTC m=+855.803913759" Feb 18 00:48:23 crc kubenswrapper[4858]: I0218 00:48:23.484139 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9xp4k" Feb 18 00:48:28 crc kubenswrapper[4858]: I0218 00:48:28.539532 4858 generic.go:334] "Generic (PLEG): container finished" podID="98aca645-8ef3-479a-9b7b-732ad5f24375" containerID="b325ec5e46d50f8fa12f8dbcc63780709ee570335d8b2a9da7fda9a7e1370395" 
exitCode=0 Feb 18 00:48:28 crc kubenswrapper[4858]: I0218 00:48:28.539618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerDied","Data":"b325ec5e46d50f8fa12f8dbcc63780709ee570335d8b2a9da7fda9a7e1370395"} Feb 18 00:48:28 crc kubenswrapper[4858]: I0218 00:48:28.543069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" event={"ID":"83a08fae-fbbe-420a-a998-b8ecafd45b71","Type":"ContainerStarted","Data":"bbfa8e2a72dce1e11dc0e6e320b12cebe5216c634f3d81092b108a17826166c9"} Feb 18 00:48:28 crc kubenswrapper[4858]: I0218 00:48:28.543365 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:28 crc kubenswrapper[4858]: I0218 00:48:28.602640 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" podStartSLOduration=1.673227658 podStartE2EDuration="9.602619594s" podCreationTimestamp="2026-02-18 00:48:19 +0000 UTC" firstStartedPulling="2026-02-18 00:48:19.869856259 +0000 UTC m=+853.175693001" lastFinishedPulling="2026-02-18 00:48:27.799248205 +0000 UTC m=+861.105084937" observedRunningTime="2026-02-18 00:48:28.599268073 +0000 UTC m=+861.905104845" watchObservedRunningTime="2026-02-18 00:48:28.602619594 +0000 UTC m=+861.908456336" Feb 18 00:48:29 crc kubenswrapper[4858]: I0218 00:48:29.552292 4858 generic.go:334] "Generic (PLEG): container finished" podID="98aca645-8ef3-479a-9b7b-732ad5f24375" containerID="b31dfbae929861bd713f6054f5d58c8ee7a968ea9edaa29d13dffcc52239f5a1" exitCode=0 Feb 18 00:48:29 crc kubenswrapper[4858]: I0218 00:48:29.552397 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerDied","Data":"b31dfbae929861bd713f6054f5d58c8ee7a968ea9edaa29d13dffcc52239f5a1"} Feb 18 00:48:30 crc kubenswrapper[4858]: I0218 00:48:30.564215 4858 generic.go:334] "Generic (PLEG): container finished" podID="98aca645-8ef3-479a-9b7b-732ad5f24375" containerID="cb980405d81f31f34dd88be0d17c6e38322ac75364b47855d213273286fca7d1" exitCode=0 Feb 18 00:48:30 crc kubenswrapper[4858]: I0218 00:48:30.564299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerDied","Data":"cb980405d81f31f34dd88be0d17c6e38322ac75364b47855d213273286fca7d1"} Feb 18 00:48:31 crc kubenswrapper[4858]: I0218 00:48:31.248191 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9xp4k" Feb 18 00:48:31 crc kubenswrapper[4858]: I0218 00:48:31.581324 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerStarted","Data":"09243a3fdc06008c83b055de80d63b696d4f325e8b4126a349f33a5b4140800a"} Feb 18 00:48:31 crc kubenswrapper[4858]: I0218 00:48:31.581426 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerStarted","Data":"eacf7b7307d69d08c247dad9e5d07ed9edddc95a3b4c83461425f37f2580a357"} Feb 18 00:48:31 crc kubenswrapper[4858]: I0218 00:48:31.581449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" 
event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerStarted","Data":"bc2d9973e919c35d1aae1ea843bd5f53e5550b6642e9251580870a746c7fae43"} Feb 18 00:48:31 crc kubenswrapper[4858]: I0218 00:48:31.581469 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerStarted","Data":"a00d9d049559cc557b075e95aad1c2996962c5370516ecd1ff515a5aeb097049"} Feb 18 00:48:31 crc kubenswrapper[4858]: I0218 00:48:31.581574 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerStarted","Data":"abc5a05f4d3c3ccf3c629d088e7ac01aed1d185453b4e6183a4fe6a5de0382c7"} Feb 18 00:48:32 crc kubenswrapper[4858]: I0218 00:48:32.600464 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bdtxp" event={"ID":"98aca645-8ef3-479a-9b7b-732ad5f24375","Type":"ContainerStarted","Data":"4d41d66caea349ca1787551c8e9a6bb32b21b8405849fc87b17c6286f281036f"} Feb 18 00:48:32 crc kubenswrapper[4858]: I0218 00:48:32.601012 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:32 crc kubenswrapper[4858]: I0218 00:48:32.642419 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bdtxp" podStartSLOduration=6.214417205 podStartE2EDuration="13.642393983s" podCreationTimestamp="2026-02-18 00:48:19 +0000 UTC" firstStartedPulling="2026-02-18 00:48:20.407188192 +0000 UTC m=+853.713024924" lastFinishedPulling="2026-02-18 00:48:27.83516497 +0000 UTC m=+861.141001702" observedRunningTime="2026-02-18 00:48:32.637237237 +0000 UTC m=+865.943074009" watchObservedRunningTime="2026-02-18 00:48:32.642393983 +0000 UTC m=+865.948230745" Feb 18 00:48:34 crc kubenswrapper[4858]: I0218 00:48:34.170604 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-t4qg5"] Feb 18 00:48:34 crc kubenswrapper[4858]: I0218 00:48:34.171862 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-t4qg5" Feb 18 00:48:34 crc kubenswrapper[4858]: W0218 00:48:34.174144 4858 reflector.go:561] object-"openstack-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Feb 18 00:48:34 crc kubenswrapper[4858]: E0218 00:48:34.174198 4858 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 00:48:34 crc kubenswrapper[4858]: W0218 00:48:34.174266 4858 reflector.go:561] object-"openstack-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Feb 18 00:48:34 crc kubenswrapper[4858]: E0218 00:48:34.174282 4858 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 00:48:34 crc kubenswrapper[4858]: W0218 00:48:34.174401 4858 reflector.go:561] object-"openstack-operators"/"openstack-operator-index-dockercfg-z527s": failed to list *v1.Secret: secrets "openstack-operator-index-dockercfg-z527s" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Feb 18 00:48:34 crc kubenswrapper[4858]: E0218 00:48:34.174425 4858 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-index-dockercfg-z527s\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-operator-index-dockercfg-z527s\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 00:48:34 crc kubenswrapper[4858]: I0218 00:48:34.189386 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t4qg5"] Feb 18 00:48:34 crc kubenswrapper[4858]: I0218 00:48:34.264362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82q2f\" (UniqueName: \"kubernetes.io/projected/199f5218-a364-4a01-b7e2-a08cde12e306-kube-api-access-82q2f\") pod \"openstack-operator-index-t4qg5\" (UID: \"199f5218-a364-4a01-b7e2-a08cde12e306\") " pod="openstack-operators/openstack-operator-index-t4qg5" Feb 18 00:48:34 crc kubenswrapper[4858]: I0218 00:48:34.365677 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82q2f\" (UniqueName: 
\"kubernetes.io/projected/199f5218-a364-4a01-b7e2-a08cde12e306-kube-api-access-82q2f\") pod \"openstack-operator-index-t4qg5\" (UID: \"199f5218-a364-4a01-b7e2-a08cde12e306\") " pod="openstack-operators/openstack-operator-index-t4qg5" Feb 18 00:48:35 crc kubenswrapper[4858]: I0218 00:48:35.012940 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-z527s" Feb 18 00:48:35 crc kubenswrapper[4858]: I0218 00:48:35.113395 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 00:48:35 crc kubenswrapper[4858]: I0218 00:48:35.225221 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:35 crc kubenswrapper[4858]: I0218 00:48:35.288035 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:35 crc kubenswrapper[4858]: E0218 00:48:35.376597 4858 projected.go:288] Couldn't get configMap openstack-operators/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:48:35 crc kubenswrapper[4858]: E0218 00:48:35.376679 4858 projected.go:194] Error preparing data for projected volume kube-api-access-82q2f for pod openstack-operators/openstack-operator-index-t4qg5: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:48:35 crc kubenswrapper[4858]: E0218 00:48:35.376769 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/199f5218-a364-4a01-b7e2-a08cde12e306-kube-api-access-82q2f podName:199f5218-a364-4a01-b7e2-a08cde12e306 nodeName:}" failed. No retries permitted until 2026-02-18 00:48:35.8767421 +0000 UTC m=+869.182578872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-82q2f" (UniqueName: "kubernetes.io/projected/199f5218-a364-4a01-b7e2-a08cde12e306-kube-api-access-82q2f") pod "openstack-operator-index-t4qg5" (UID: "199f5218-a364-4a01-b7e2-a08cde12e306") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:48:35 crc kubenswrapper[4858]: I0218 00:48:35.526540 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 00:48:35 crc kubenswrapper[4858]: I0218 00:48:35.887639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82q2f\" (UniqueName: \"kubernetes.io/projected/199f5218-a364-4a01-b7e2-a08cde12e306-kube-api-access-82q2f\") pod \"openstack-operator-index-t4qg5\" (UID: \"199f5218-a364-4a01-b7e2-a08cde12e306\") " pod="openstack-operators/openstack-operator-index-t4qg5" Feb 18 00:48:35 crc kubenswrapper[4858]: I0218 00:48:35.897317 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82q2f\" (UniqueName: \"kubernetes.io/projected/199f5218-a364-4a01-b7e2-a08cde12e306-kube-api-access-82q2f\") pod \"openstack-operator-index-t4qg5\" (UID: \"199f5218-a364-4a01-b7e2-a08cde12e306\") " pod="openstack-operators/openstack-operator-index-t4qg5" Feb 18 00:48:35 crc kubenswrapper[4858]: I0218 00:48:35.994201 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-t4qg5" Feb 18 00:48:36 crc kubenswrapper[4858]: I0218 00:48:36.458275 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t4qg5"] Feb 18 00:48:36 crc kubenswrapper[4858]: W0218 00:48:36.472821 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod199f5218_a364_4a01_b7e2_a08cde12e306.slice/crio-ddab5d1fb9c2bf74ce4156afd43e6aa762e78f37672e4c3c885feee32aaa9d0c WatchSource:0}: Error finding container ddab5d1fb9c2bf74ce4156afd43e6aa762e78f37672e4c3c885feee32aaa9d0c: Status 404 returned error can't find the container with id ddab5d1fb9c2bf74ce4156afd43e6aa762e78f37672e4c3c885feee32aaa9d0c Feb 18 00:48:36 crc kubenswrapper[4858]: I0218 00:48:36.627699 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t4qg5" event={"ID":"199f5218-a364-4a01-b7e2-a08cde12e306","Type":"ContainerStarted","Data":"ddab5d1fb9c2bf74ce4156afd43e6aa762e78f37672e4c3c885feee32aaa9d0c"} Feb 18 00:48:37 crc kubenswrapper[4858]: I0218 00:48:37.553781 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-t4qg5"] Feb 18 00:48:38 crc kubenswrapper[4858]: I0218 00:48:38.162947 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sq8rn"] Feb 18 00:48:38 crc kubenswrapper[4858]: I0218 00:48:38.166444 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sq8rn" Feb 18 00:48:38 crc kubenswrapper[4858]: I0218 00:48:38.180560 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sq8rn"] Feb 18 00:48:38 crc kubenswrapper[4858]: I0218 00:48:38.239235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt9hd\" (UniqueName: \"kubernetes.io/projected/a0f2c0db-96cb-4884-80fe-20adeb5728cf-kube-api-access-gt9hd\") pod \"openstack-operator-index-sq8rn\" (UID: \"a0f2c0db-96cb-4884-80fe-20adeb5728cf\") " pod="openstack-operators/openstack-operator-index-sq8rn" Feb 18 00:48:38 crc kubenswrapper[4858]: I0218 00:48:38.340452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt9hd\" (UniqueName: \"kubernetes.io/projected/a0f2c0db-96cb-4884-80fe-20adeb5728cf-kube-api-access-gt9hd\") pod \"openstack-operator-index-sq8rn\" (UID: \"a0f2c0db-96cb-4884-80fe-20adeb5728cf\") " pod="openstack-operators/openstack-operator-index-sq8rn" Feb 18 00:48:38 crc kubenswrapper[4858]: I0218 00:48:38.378235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt9hd\" (UniqueName: \"kubernetes.io/projected/a0f2c0db-96cb-4884-80fe-20adeb5728cf-kube-api-access-gt9hd\") pod \"openstack-operator-index-sq8rn\" (UID: \"a0f2c0db-96cb-4884-80fe-20adeb5728cf\") " pod="openstack-operators/openstack-operator-index-sq8rn" Feb 18 00:48:38 crc kubenswrapper[4858]: I0218 00:48:38.500878 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sq8rn" Feb 18 00:48:39 crc kubenswrapper[4858]: I0218 00:48:39.326312 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sq8rn"] Feb 18 00:48:39 crc kubenswrapper[4858]: W0218 00:48:39.340860 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0f2c0db_96cb_4884_80fe_20adeb5728cf.slice/crio-3f03c914f5c603addb79e976ed5e2e3cfc72a60150a4a2e4999b8eeb1012200f WatchSource:0}: Error finding container 3f03c914f5c603addb79e976ed5e2e3cfc72a60150a4a2e4999b8eeb1012200f: Status 404 returned error can't find the container with id 3f03c914f5c603addb79e976ed5e2e3cfc72a60150a4a2e4999b8eeb1012200f Feb 18 00:48:39 crc kubenswrapper[4858]: I0218 00:48:39.654743 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sq8rn" event={"ID":"a0f2c0db-96cb-4884-80fe-20adeb5728cf","Type":"ContainerStarted","Data":"8f0749243a94a25fa0e90e88d5e1e6ff954d2d6ae888ec6bf3153ed0804a2f01"} Feb 18 00:48:39 crc kubenswrapper[4858]: I0218 00:48:39.655114 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sq8rn" event={"ID":"a0f2c0db-96cb-4884-80fe-20adeb5728cf","Type":"ContainerStarted","Data":"3f03c914f5c603addb79e976ed5e2e3cfc72a60150a4a2e4999b8eeb1012200f"} Feb 18 00:48:39 crc kubenswrapper[4858]: I0218 00:48:39.656785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t4qg5" event={"ID":"199f5218-a364-4a01-b7e2-a08cde12e306","Type":"ContainerStarted","Data":"d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b"} Feb 18 00:48:39 crc kubenswrapper[4858]: I0218 00:48:39.656960 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-t4qg5" podUID="199f5218-a364-4a01-b7e2-a08cde12e306" containerName="registry-server" containerID="cri-o://d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b" gracePeriod=2 Feb 18 00:48:39 crc kubenswrapper[4858]: I0218 00:48:39.678000 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sq8rn" podStartSLOduration=1.6202512869999999 podStartE2EDuration="1.677977404s" podCreationTimestamp="2026-02-18 00:48:38 +0000 UTC" firstStartedPulling="2026-02-18 00:48:39.3452331 +0000 UTC m=+872.651069872" lastFinishedPulling="2026-02-18 00:48:39.402959257 +0000 UTC m=+872.708795989" observedRunningTime="2026-02-18 00:48:39.671202068 +0000 UTC m=+872.977038810" watchObservedRunningTime="2026-02-18 00:48:39.677977404 +0000 UTC m=+872.983814166" Feb 18 00:48:39 crc kubenswrapper[4858]: I0218 00:48:39.681992 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qv72m" Feb 18 00:48:39 crc kubenswrapper[4858]: I0218 00:48:39.690830 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-t4qg5" podStartSLOduration=3.211909351 podStartE2EDuration="5.690815137s" podCreationTimestamp="2026-02-18 00:48:34 +0000 UTC" firstStartedPulling="2026-02-18 00:48:36.479038899 +0000 UTC m=+869.784875631" lastFinishedPulling="2026-02-18 00:48:38.957944685 +0000 UTC m=+872.263781417" observedRunningTime="2026-02-18 00:48:39.688231463 +0000 UTC m=+872.994068205" 
watchObservedRunningTime="2026-02-18 00:48:39.690815137 +0000 UTC m=+872.996651879" Feb 18 00:48:39 crc kubenswrapper[4858]: I0218 00:48:39.780001 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-vbx9l" Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.039276 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t4qg5" Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.066261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82q2f\" (UniqueName: \"kubernetes.io/projected/199f5218-a364-4a01-b7e2-a08cde12e306-kube-api-access-82q2f\") pod \"199f5218-a364-4a01-b7e2-a08cde12e306\" (UID: \"199f5218-a364-4a01-b7e2-a08cde12e306\") " Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.076763 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199f5218-a364-4a01-b7e2-a08cde12e306-kube-api-access-82q2f" (OuterVolumeSpecName: "kube-api-access-82q2f") pod "199f5218-a364-4a01-b7e2-a08cde12e306" (UID: "199f5218-a364-4a01-b7e2-a08cde12e306"). InnerVolumeSpecName "kube-api-access-82q2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.168326 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82q2f\" (UniqueName: \"kubernetes.io/projected/199f5218-a364-4a01-b7e2-a08cde12e306-kube-api-access-82q2f\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.231851 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bdtxp" Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.668356 4858 generic.go:334] "Generic (PLEG): container finished" podID="199f5218-a364-4a01-b7e2-a08cde12e306" containerID="d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b" exitCode=0 Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.668477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t4qg5" event={"ID":"199f5218-a364-4a01-b7e2-a08cde12e306","Type":"ContainerDied","Data":"d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b"} Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.668576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t4qg5" event={"ID":"199f5218-a364-4a01-b7e2-a08cde12e306","Type":"ContainerDied","Data":"ddab5d1fb9c2bf74ce4156afd43e6aa762e78f37672e4c3c885feee32aaa9d0c"} Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.668608 4858 scope.go:117] "RemoveContainer" containerID="d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b" Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.668528 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-t4qg5" Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.691084 4858 scope.go:117] "RemoveContainer" containerID="d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b" Feb 18 00:48:40 crc kubenswrapper[4858]: E0218 00:48:40.691729 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b\": container with ID starting with d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b not found: ID does not exist" containerID="d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b" Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.691793 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b"} err="failed to get container status \"d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b\": rpc error: code = NotFound desc = could not find container \"d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b\": container with ID starting with d212f7931db4c6f2146a47ae11ff28cf857785b14d77fe8c2a2f5c1a717aeb5b not found: ID does not exist" Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.730356 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-t4qg5"] Feb 18 00:48:40 crc kubenswrapper[4858]: I0218 00:48:40.739395 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-t4qg5"] Feb 18 00:48:41 crc kubenswrapper[4858]: I0218 00:48:41.430163 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199f5218-a364-4a01-b7e2-a08cde12e306" path="/var/lib/kubelet/pods/199f5218-a364-4a01-b7e2-a08cde12e306/volumes" Feb 18 00:48:48 crc kubenswrapper[4858]: I0218 00:48:48.501611 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sq8rn" Feb 18 00:48:48 crc kubenswrapper[4858]: I0218 00:48:48.502298 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-sq8rn" Feb 18 00:48:48 crc kubenswrapper[4858]: I0218 00:48:48.542139 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-sq8rn" Feb 18 00:48:48 crc kubenswrapper[4858]: I0218 00:48:48.771275 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sq8rn" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.618907 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck"] Feb 18 00:48:50 crc kubenswrapper[4858]: E0218 00:48:50.619393 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199f5218-a364-4a01-b7e2-a08cde12e306" containerName="registry-server" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.619405 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="199f5218-a364-4a01-b7e2-a08cde12e306" containerName="registry-server" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.619528 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="199f5218-a364-4a01-b7e2-a08cde12e306" containerName="registry-server" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 
00:48:50.620307 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.625813 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-v8fqf" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.636445 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck"] Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.817745 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-util\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.817839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789vs\" (UniqueName: \"kubernetes.io/projected/3b97cc05-751a-49e4-b75b-7f2606d14fdf-kube-api-access-789vs\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.817888 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-bundle\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.919655 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-util\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.919748 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789vs\" (UniqueName: \"kubernetes.io/projected/3b97cc05-751a-49e4-b75b-7f2606d14fdf-kube-api-access-789vs\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.919826 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-bundle\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.920712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-util\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.920856 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-bundle\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:50 crc kubenswrapper[4858]: I0218 00:48:50.946469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789vs\" (UniqueName: \"kubernetes.io/projected/3b97cc05-751a-49e4-b75b-7f2606d14fdf-kube-api-access-789vs\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:51 crc kubenswrapper[4858]: I0218 00:48:51.007357 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:51 crc kubenswrapper[4858]: I0218 00:48:51.566984 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck"] Feb 18 00:48:51 crc kubenswrapper[4858]: W0218 00:48:51.571855 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b97cc05_751a_49e4_b75b_7f2606d14fdf.slice/crio-a88219c26afd3551d87f1cf3fc5b12224ff9ca4182d87d0bd311805be415cc83 WatchSource:0}: Error finding container a88219c26afd3551d87f1cf3fc5b12224ff9ca4182d87d0bd311805be415cc83: Status 404 returned error can't find the container with id a88219c26afd3551d87f1cf3fc5b12224ff9ca4182d87d0bd311805be415cc83 Feb 18 00:48:51 crc kubenswrapper[4858]: I0218 00:48:51.751738 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" event={"ID":"3b97cc05-751a-49e4-b75b-7f2606d14fdf","Type":"ContainerStarted","Data":"a88219c26afd3551d87f1cf3fc5b12224ff9ca4182d87d0bd311805be415cc83"} Feb 18 00:48:52 crc kubenswrapper[4858]: I0218 00:48:52.762642 4858 generic.go:334] "Generic (PLEG): container finished" podID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" containerID="a78d7db1d5884252d0b4011c0ef14bd25753fb85525ea9b9fe68675fe574f01d" exitCode=0 Feb 18 00:48:52 crc kubenswrapper[4858]: I0218 00:48:52.762751 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" event={"ID":"3b97cc05-751a-49e4-b75b-7f2606d14fdf","Type":"ContainerDied","Data":"a78d7db1d5884252d0b4011c0ef14bd25753fb85525ea9b9fe68675fe574f01d"} Feb 18 00:48:52 crc kubenswrapper[4858]: I0218 00:48:52.766289 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:48:54 crc kubenswrapper[4858]: I0218 00:48:54.778619 4858 generic.go:334] "Generic (PLEG): container finished" podID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" 
containerID="923cf8ec4b09b0661e44d1c3d09c6e720dd90faaabca6306087689211073b41e" exitCode=0 Feb 18 00:48:54 crc kubenswrapper[4858]: I0218 00:48:54.778705 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" event={"ID":"3b97cc05-751a-49e4-b75b-7f2606d14fdf","Type":"ContainerDied","Data":"923cf8ec4b09b0661e44d1c3d09c6e720dd90faaabca6306087689211073b41e"} Feb 18 00:48:55 crc kubenswrapper[4858]: I0218 00:48:55.265291 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:48:55 crc kubenswrapper[4858]: I0218 00:48:55.265636 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:48:55 crc kubenswrapper[4858]: I0218 00:48:55.790236 4858 generic.go:334] "Generic (PLEG): container finished" podID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" containerID="afd689482ca5364fe581c98936f749f37cb5d57ed580c128175c89d8107dede4" exitCode=0 Feb 18 00:48:55 crc kubenswrapper[4858]: I0218 00:48:55.790284 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" event={"ID":"3b97cc05-751a-49e4-b75b-7f2606d14fdf","Type":"ContainerDied","Data":"afd689482ca5364fe581c98936f749f37cb5d57ed580c128175c89d8107dede4"} Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.142700 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.229186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-bundle\") pod \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.229261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-789vs\" (UniqueName: \"kubernetes.io/projected/3b97cc05-751a-49e4-b75b-7f2606d14fdf-kube-api-access-789vs\") pod \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.229384 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-util\") pod \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\" (UID: \"3b97cc05-751a-49e4-b75b-7f2606d14fdf\") " Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.231753 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-bundle" (OuterVolumeSpecName: "bundle") pod "3b97cc05-751a-49e4-b75b-7f2606d14fdf" (UID: "3b97cc05-751a-49e4-b75b-7f2606d14fdf"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.238012 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b97cc05-751a-49e4-b75b-7f2606d14fdf-kube-api-access-789vs" (OuterVolumeSpecName: "kube-api-access-789vs") pod "3b97cc05-751a-49e4-b75b-7f2606d14fdf" (UID: "3b97cc05-751a-49e4-b75b-7f2606d14fdf"). InnerVolumeSpecName "kube-api-access-789vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.265626 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-util" (OuterVolumeSpecName: "util") pod "3b97cc05-751a-49e4-b75b-7f2606d14fdf" (UID: "3b97cc05-751a-49e4-b75b-7f2606d14fdf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.331275 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.331344 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-789vs\" (UniqueName: \"kubernetes.io/projected/3b97cc05-751a-49e4-b75b-7f2606d14fdf-kube-api-access-789vs\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.331367 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b97cc05-751a-49e4-b75b-7f2606d14fdf-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.811950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" event={"ID":"3b97cc05-751a-49e4-b75b-7f2606d14fdf","Type":"ContainerDied","Data":"a88219c26afd3551d87f1cf3fc5b12224ff9ca4182d87d0bd311805be415cc83"} Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.812356 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88219c26afd3551d87f1cf3fc5b12224ff9ca4182d87d0bd311805be415cc83" Feb 18 00:48:57 crc kubenswrapper[4858]: I0218 00:48:57.812036 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.331436 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx"] Feb 18 00:49:03 crc kubenswrapper[4858]: E0218 00:49:03.331960 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" containerName="pull" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.331974 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" containerName="pull" Feb 18 00:49:03 crc kubenswrapper[4858]: E0218 00:49:03.331986 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" containerName="extract" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.331994 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" containerName="extract" Feb 18 00:49:03 crc kubenswrapper[4858]: E0218 00:49:03.332011 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" containerName="util" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.332020 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" containerName="util" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.332167 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b97cc05-751a-49e4-b75b-7f2606d14fdf" containerName="extract" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.332649 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.335805 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mjmpv" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.757705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsbmp\" (UniqueName: \"kubernetes.io/projected/05ed6418-42b7-4994-9e6b-ced846840c80-kube-api-access-tsbmp\") pod \"openstack-operator-controller-init-69ff8ccd5-kwxmx\" (UID: \"05ed6418-42b7-4994-9e6b-ced846840c80\") " pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.859066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsbmp\" (UniqueName: \"kubernetes.io/projected/05ed6418-42b7-4994-9e6b-ced846840c80-kube-api-access-tsbmp\") pod \"openstack-operator-controller-init-69ff8ccd5-kwxmx\" (UID: \"05ed6418-42b7-4994-9e6b-ced846840c80\") " pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" Feb 18 00:49:03 crc kubenswrapper[4858]: I0218 00:49:03.894800 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsbmp\" (UniqueName: \"kubernetes.io/projected/05ed6418-42b7-4994-9e6b-ced846840c80-kube-api-access-tsbmp\") pod \"openstack-operator-controller-init-69ff8ccd5-kwxmx\" (UID: \"05ed6418-42b7-4994-9e6b-ced846840c80\") " pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" Feb 18 00:49:04 crc kubenswrapper[4858]: I0218 00:49:04.092438 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" Feb 18 00:49:04 crc kubenswrapper[4858]: I0218 00:49:04.093759 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx"] Feb 18 00:49:04 crc kubenswrapper[4858]: I0218 00:49:04.472197 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx"] Feb 18 00:49:05 crc kubenswrapper[4858]: I0218 00:49:05.047900 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" event={"ID":"05ed6418-42b7-4994-9e6b-ced846840c80","Type":"ContainerStarted","Data":"66265a7898ecb328d4cbd16e60cbd54e39858793ee9b21da90ac48496d057b99"} Feb 18 00:49:09 crc kubenswrapper[4858]: I0218 00:49:09.081450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" event={"ID":"05ed6418-42b7-4994-9e6b-ced846840c80","Type":"ContainerStarted","Data":"db9526019d775d45975f36b79403fc94409ce7edb1b592c8f1c78b7328c55f7e"} Feb 18 00:49:09 crc kubenswrapper[4858]: I0218 00:49:09.082922 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" Feb 18 00:49:09 crc kubenswrapper[4858]: I0218 00:49:09.123923 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" podStartSLOduration=2.193202414 podStartE2EDuration="6.123898193s" podCreationTimestamp="2026-02-18 00:49:03 +0000 UTC" firstStartedPulling="2026-02-18 00:49:04.480662909 +0000 UTC m=+897.786499641" lastFinishedPulling="2026-02-18 00:49:08.411358648 +0000 UTC m=+901.717195420" observedRunningTime="2026-02-18 00:49:09.115692483 +0000 UTC m=+902.421529255" watchObservedRunningTime="2026-02-18 00:49:09.123898193 +0000 UTC m=+902.429734965" Feb 18 00:49:14 crc kubenswrapper[4858]: I0218 00:49:14.096691 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-69ff8ccd5-kwxmx" Feb 18 00:49:25 crc kubenswrapper[4858]: I0218 00:49:25.266307 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:49:25 crc kubenswrapper[4858]: I0218 00:49:25.266640 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.176697 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.178044 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.181744 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-srplk" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.187137 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.203004 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.203927 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.206871 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zqf2l" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.231056 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.251872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb4m9\" (UniqueName: \"kubernetes.io/projected/e28fd875-635a-43eb-ae2e-2544aa39cc84-kube-api-access-jb4m9\") pod \"barbican-operator-controller-manager-868647ff47-8hqkm\" (UID: \"e28fd875-635a-43eb-ae2e-2544aa39cc84\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.251920 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmtz\" (UniqueName: \"kubernetes.io/projected/c33cc4eb-a44e-4b2f-8ea8-1688d831a12a-kube-api-access-jrmtz\") pod \"cinder-operator-controller-manager-5d946d989d-lrmvx\" (UID: \"c33cc4eb-a44e-4b2f-8ea8-1688d831a12a\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.252572 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.253356 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.258819 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.259555 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.259981 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fcc48" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.262789 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zqtn5" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.268272 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.272460 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.278352 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.279226 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.280837 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-9s4wk" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.292644 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.293423 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.299977 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-k7rqt" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.303877 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.314348 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.329630 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.331185 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.338155 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2hjnj" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.338378 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.350603 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.351662 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353126 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-j5fwc" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353247 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8fl\" (UniqueName: \"kubernetes.io/projected/758bf8e1-fe1b-4c02-8ad8-6d80237e0024-kube-api-access-tz8fl\") pod \"ironic-operator-controller-manager-554564d7fc-rzzqb\" (UID: \"758bf8e1-fe1b-4c02-8ad8-6d80237e0024\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353308 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb4m9\" (UniqueName: \"kubernetes.io/projected/e28fd875-635a-43eb-ae2e-2544aa39cc84-kube-api-access-jb4m9\") pod \"barbican-operator-controller-manager-868647ff47-8hqkm\" (UID: \"e28fd875-635a-43eb-ae2e-2544aa39cc84\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrmtz\" (UniqueName: \"kubernetes.io/projected/c33cc4eb-a44e-4b2f-8ea8-1688d831a12a-kube-api-access-jrmtz\") pod \"cinder-operator-controller-manager-5d946d989d-lrmvx\" (UID: \"c33cc4eb-a44e-4b2f-8ea8-1688d831a12a\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353430 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfv7\" (UniqueName: \"kubernetes.io/projected/b0ca0509-6112-4163-a060-ea15122be64a-kube-api-access-rjfv7\") pod \"horizon-operator-controller-manager-5b9b8895d5-rlds4\" (UID: \"b0ca0509-6112-4163-a060-ea15122be64a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353518 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvvj7\" (UniqueName: \"kubernetes.io/projected/ca07929f-e6f1-4b35-bcd8-b8a8c2fa6ce6-kube-api-access-zvvj7\") pod \"designate-operator-controller-manager-6d8bf5c495-bkng8\" (UID: \"ca07929f-e6f1-4b35-bcd8-b8a8c2fa6ce6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353566 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gb7p\" (UniqueName: \"kubernetes.io/projected/597262ab-929d-4c51-8400-d6a6df47dcbd-kube-api-access-9gb7p\") pod \"glance-operator-controller-manager-77987464f4-ksv8b\" (UID: \"597262ab-929d-4c51-8400-d6a6df47dcbd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353598 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24t8j\" (UniqueName: \"kubernetes.io/projected/9df9a5db-2273-4253-9b76-b67377d8f7f6-kube-api-access-24t8j\") pod \"heat-operator-controller-manager-69f49c598c-74zsv\" (UID: \"9df9a5db-2273-4253-9b76-b67377d8f7f6\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.353636 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphsc\" (UniqueName: \"kubernetes.io/projected/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-kube-api-access-kphsc\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.355191 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.362722 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.368747 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.369738 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.398556 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rk2wx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.399305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb4m9\" (UniqueName: \"kubernetes.io/projected/e28fd875-635a-43eb-ae2e-2544aa39cc84-kube-api-access-jb4m9\") pod \"barbican-operator-controller-manager-868647ff47-8hqkm\" (UID: \"e28fd875-635a-43eb-ae2e-2544aa39cc84\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.399359 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrmtz\" (UniqueName: \"kubernetes.io/projected/c33cc4eb-a44e-4b2f-8ea8-1688d831a12a-kube-api-access-jrmtz\") pod \"cinder-operator-controller-manager-5d946d989d-lrmvx\" (UID: \"c33cc4eb-a44e-4b2f-8ea8-1688d831a12a\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.440972 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.445195 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.462584 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.463973 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfv7\" (UniqueName: \"kubernetes.io/projected/b0ca0509-6112-4163-a060-ea15122be64a-kube-api-access-rjfv7\") pod \"horizon-operator-controller-manager-5b9b8895d5-rlds4\" (UID: \"b0ca0509-6112-4163-a060-ea15122be64a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.464097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvvj7\" (UniqueName: \"kubernetes.io/projected/ca07929f-e6f1-4b35-bcd8-b8a8c2fa6ce6-kube-api-access-zvvj7\") pod \"designate-operator-controller-manager-6d8bf5c495-bkng8\" (UID: \"ca07929f-e6f1-4b35-bcd8-b8a8c2fa6ce6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.464205 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gb7p\" (UniqueName: \"kubernetes.io/projected/597262ab-929d-4c51-8400-d6a6df47dcbd-kube-api-access-9gb7p\") pod \"glance-operator-controller-manager-77987464f4-ksv8b\" (UID: \"597262ab-929d-4c51-8400-d6a6df47dcbd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.464448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24t8j\" (UniqueName: \"kubernetes.io/projected/9df9a5db-2273-4253-9b76-b67377d8f7f6-kube-api-access-24t8j\") pod \"heat-operator-controller-manager-69f49c598c-74zsv\" (UID: 
\"9df9a5db-2273-4253-9b76-b67377d8f7f6\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.464503 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphsc\" (UniqueName: \"kubernetes.io/projected/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-kube-api-access-kphsc\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.465006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz8fl\" (UniqueName: \"kubernetes.io/projected/758bf8e1-fe1b-4c02-8ad8-6d80237e0024-kube-api-access-tz8fl\") pod \"ironic-operator-controller-manager-554564d7fc-rzzqb\" (UID: \"758bf8e1-fe1b-4c02-8ad8-6d80237e0024\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" Feb 18 00:49:34 crc kubenswrapper[4858]: E0218 00:49:34.465154 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:34 crc kubenswrapper[4858]: E0218 00:49:34.465202 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert podName:58a2adef-c01f-464e-aa1d-8c2d8a6e5c58 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:34.965182382 +0000 UTC m=+928.271019114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert") pod "infra-operator-controller-manager-79d975b745-ndk6f" (UID: "58a2adef-c01f-464e-aa1d-8c2d8a6e5c58") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.465063 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.481863 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bgxlj" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.506392 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.513712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24t8j\" (UniqueName: \"kubernetes.io/projected/9df9a5db-2273-4253-9b76-b67377d8f7f6-kube-api-access-24t8j\") pod \"heat-operator-controller-manager-69f49c598c-74zsv\" (UID: \"9df9a5db-2273-4253-9b76-b67377d8f7f6\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.515087 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz8fl\" (UniqueName: \"kubernetes.io/projected/758bf8e1-fe1b-4c02-8ad8-6d80237e0024-kube-api-access-tz8fl\") pod \"ironic-operator-controller-manager-554564d7fc-rzzqb\" (UID: \"758bf8e1-fe1b-4c02-8ad8-6d80237e0024\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.516185 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.516405 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kphsc\" (UniqueName: \"kubernetes.io/projected/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-kube-api-access-kphsc\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.521028 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvvj7\" (UniqueName: \"kubernetes.io/projected/ca07929f-e6f1-4b35-bcd8-b8a8c2fa6ce6-kube-api-access-zvvj7\") pod \"designate-operator-controller-manager-6d8bf5c495-bkng8\" (UID: \"ca07929f-e6f1-4b35-bcd8-b8a8c2fa6ce6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.524899 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.528701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gb7p\" (UniqueName: \"kubernetes.io/projected/597262ab-929d-4c51-8400-d6a6df47dcbd-kube-api-access-9gb7p\") pod \"glance-operator-controller-manager-77987464f4-ksv8b\" (UID: \"597262ab-929d-4c51-8400-d6a6df47dcbd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.532143 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfv7\" (UniqueName: \"kubernetes.io/projected/b0ca0509-6112-4163-a060-ea15122be64a-kube-api-access-rjfv7\") pod \"horizon-operator-controller-manager-5b9b8895d5-rlds4\" (UID: \"b0ca0509-6112-4163-a060-ea15122be64a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.567084 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfq2\" (UniqueName: \"kubernetes.io/projected/bddd921f-895d-4b1d-8203-2aff8a721ed9-kube-api-access-zrfq2\") pod \"manila-operator-controller-manager-54f6768c69-kvqvz\" (UID: \"bddd921f-895d-4b1d-8203-2aff8a721ed9\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.567157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gn58\" (UniqueName: \"kubernetes.io/projected/f5dba120-621f-4686-8e83-6f10779d8cfb-kube-api-access-7gn58\") pod \"keystone-operator-controller-manager-b4d948c87-qxqhh\" (UID: \"f5dba120-621f-4686-8e83-6f10779d8cfb\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.573581 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.583024 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.583810 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.586015 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-f7p9j" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.587126 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.590013 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.601868 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.603975 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-lhgdt" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.605292 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.605974 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.609027 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9k9jz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.609707 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.618483 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.622194 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.634215 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.656593 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.656652 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.657339 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.663978 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.673791 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.674164 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xxsxw" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.674865 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.673798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfq2\" (UniqueName: \"kubernetes.io/projected/bddd921f-895d-4b1d-8203-2aff8a721ed9-kube-api-access-zrfq2\") pod \"manila-operator-controller-manager-54f6768c69-kvqvz\" (UID: \"bddd921f-895d-4b1d-8203-2aff8a721ed9\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.675769 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gn58\" (UniqueName: \"kubernetes.io/projected/f5dba120-621f-4686-8e83-6f10779d8cfb-kube-api-access-7gn58\") pod \"keystone-operator-controller-manager-b4d948c87-qxqhh\" (UID: \"f5dba120-621f-4686-8e83-6f10779d8cfb\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.677690 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4mbt6" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.684572 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.685391 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.685471 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.689144 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.689966 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.693745 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gn58\" (UniqueName: \"kubernetes.io/projected/f5dba120-621f-4686-8e83-6f10779d8cfb-kube-api-access-7gn58\") pod \"keystone-operator-controller-manager-b4d948c87-qxqhh\" (UID: \"f5dba120-621f-4686-8e83-6f10779d8cfb\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.694205 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-w96x2" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.694310 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.694553 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-5bh8s" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.695308 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfq2\" (UniqueName: \"kubernetes.io/projected/bddd921f-895d-4b1d-8203-2aff8a721ed9-kube-api-access-zrfq2\") pod \"manila-operator-controller-manager-54f6768c69-kvqvz\" (UID: \"bddd921f-895d-4b1d-8203-2aff8a721ed9\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.714268 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.734641 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.738222 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.766296 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.767308 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.771862 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bwshm" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.776974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxg6\" (UniqueName: \"kubernetes.io/projected/f3e44d9b-6d44-4aa9-9100-c2e139131ec9-kube-api-access-wlxg6\") pod \"octavia-operator-controller-manager-69f8888797-dm2f9\" (UID: \"f3e44d9b-6d44-4aa9-9100-c2e139131ec9\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.777057 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bdz5\" (UniqueName: \"kubernetes.io/projected/11bc7389-c53b-4030-892b-43da85d70fe1-kube-api-access-8bdz5\") pod \"nova-operator-controller-manager-567668f5cf-8v5bz\" (UID: \"11bc7389-c53b-4030-892b-43da85d70fe1\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.777091 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j2qd\" (UniqueName: \"kubernetes.io/projected/dda54f36-cfc8-468e-8101-f8041735931f-kube-api-access-2j2qd\") pod \"neutron-operator-controller-manager-64ddbf8bb-qqgpg\" (UID: \"dda54f36-cfc8-468e-8101-f8041735931f\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.777121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngm9c\" (UniqueName: \"kubernetes.io/projected/860622ee-6268-4ff0-a2ae-403ae8b984fc-kube-api-access-ngm9c\") pod \"ovn-operator-controller-manager-d44cf6b75-9k4wv\" (UID: \"860622ee-6268-4ff0-a2ae-403ae8b984fc\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.777158 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbwk\" (UniqueName: \"kubernetes.io/projected/28b5bfad-085d-48c6-b15f-c431d57de698-kube-api-access-zkbwk\") pod \"mariadb-operator-controller-manager-6994f66f48-xwgm9\" (UID: \"28b5bfad-085d-48c6-b15f-c431d57de698\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.781015 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.801173 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.820766 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.829244 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.837709 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hqwhm" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.874465 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.884036 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngm9c\" (UniqueName: \"kubernetes.io/projected/860622ee-6268-4ff0-a2ae-403ae8b984fc-kube-api-access-ngm9c\") pod \"ovn-operator-controller-manager-d44cf6b75-9k4wv\" (UID: \"860622ee-6268-4ff0-a2ae-403ae8b984fc\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.884082 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkbwk\" (UniqueName: \"kubernetes.io/projected/28b5bfad-085d-48c6-b15f-c431d57de698-kube-api-access-zkbwk\") pod \"mariadb-operator-controller-manager-6994f66f48-xwgm9\" (UID: \"28b5bfad-085d-48c6-b15f-c431d57de698\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.884139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.884168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlxg6\" (UniqueName: \"kubernetes.io/projected/f3e44d9b-6d44-4aa9-9100-c2e139131ec9-kube-api-access-wlxg6\") pod \"octavia-operator-controller-manager-69f8888797-dm2f9\" (UID: \"f3e44d9b-6d44-4aa9-9100-c2e139131ec9\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.884195 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxr7\" (UniqueName: \"kubernetes.io/projected/447c1cfc-d76f-4985-bd95-285a3fbc63cc-kube-api-access-tzxr7\") pod \"placement-operator-controller-manager-8497b45c89-5b4nx\" (UID: \"447c1cfc-d76f-4985-bd95-285a3fbc63cc\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.884229 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk98m\" (UniqueName: \"kubernetes.io/projected/229552d0-e72e-49af-a4c7-6052e2a7bf5a-kube-api-access-qk98m\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.884269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bdz5\" (UniqueName: 
\"kubernetes.io/projected/11bc7389-c53b-4030-892b-43da85d70fe1-kube-api-access-8bdz5\") pod \"nova-operator-controller-manager-567668f5cf-8v5bz\" (UID: \"11bc7389-c53b-4030-892b-43da85d70fe1\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.884300 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j2qd\" (UniqueName: \"kubernetes.io/projected/dda54f36-cfc8-468e-8101-f8041735931f-kube-api-access-2j2qd\") pod \"neutron-operator-controller-manager-64ddbf8bb-qqgpg\" (UID: \"dda54f36-cfc8-468e-8101-f8041735931f\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.884325 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mhxv\" (UniqueName: \"kubernetes.io/projected/eae2173c-97fd-4d89-8d72-0d44f7c87f9b-kube-api-access-5mhxv\") pod \"swift-operator-controller-manager-68f46476f-xhtjl\" (UID: \"eae2173c-97fd-4d89-8d72-0d44f7c87f9b\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.903244 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j2qd\" (UniqueName: \"kubernetes.io/projected/dda54f36-cfc8-468e-8101-f8041735931f-kube-api-access-2j2qd\") pod \"neutron-operator-controller-manager-64ddbf8bb-qqgpg\" (UID: \"dda54f36-cfc8-468e-8101-f8041735931f\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.903457 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.905125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkbwk\" (UniqueName: \"kubernetes.io/projected/28b5bfad-085d-48c6-b15f-c431d57de698-kube-api-access-zkbwk\") pod \"mariadb-operator-controller-manager-6994f66f48-xwgm9\" (UID: \"28b5bfad-085d-48c6-b15f-c431d57de698\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.906967 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bdz5\" (UniqueName: \"kubernetes.io/projected/11bc7389-c53b-4030-892b-43da85d70fe1-kube-api-access-8bdz5\") pod \"nova-operator-controller-manager-567668f5cf-8v5bz\" (UID: \"11bc7389-c53b-4030-892b-43da85d70fe1\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.916144 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngm9c\" (UniqueName: \"kubernetes.io/projected/860622ee-6268-4ff0-a2ae-403ae8b984fc-kube-api-access-ngm9c\") pod \"ovn-operator-controller-manager-d44cf6b75-9k4wv\" (UID: \"860622ee-6268-4ff0-a2ae-403ae8b984fc\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.924816 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.926291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlxg6\" (UniqueName: \"kubernetes.io/projected/f3e44d9b-6d44-4aa9-9100-c2e139131ec9-kube-api-access-wlxg6\") pod \"octavia-operator-controller-manager-69f8888797-dm2f9\" (UID: \"f3e44d9b-6d44-4aa9-9100-c2e139131ec9\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.985940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mhxv\" (UniqueName: \"kubernetes.io/projected/eae2173c-97fd-4d89-8d72-0d44f7c87f9b-kube-api-access-5mhxv\") pod \"swift-operator-controller-manager-68f46476f-xhtjl\" (UID: \"eae2173c-97fd-4d89-8d72-0d44f7c87f9b\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.986014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.986035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.986072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzxr7\" (UniqueName: \"kubernetes.io/projected/447c1cfc-d76f-4985-bd95-285a3fbc63cc-kube-api-access-tzxr7\") pod \"placement-operator-controller-manager-8497b45c89-5b4nx\" (UID: \"447c1cfc-d76f-4985-bd95-285a3fbc63cc\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.986107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmjv4\" (UniqueName: \"kubernetes.io/projected/e60cf8fd-9033-4f85-a2a1-16441bd58a56-kube-api-access-hmjv4\") pod \"telemetry-operator-controller-manager-c6f9cb8b-f7txj\" (UID: \"e60cf8fd-9033-4f85-a2a1-16441bd58a56\") " pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.986125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk98m\" (UniqueName: \"kubernetes.io/projected/229552d0-e72e-49af-a4c7-6052e2a7bf5a-kube-api-access-qk98m\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:34 crc kubenswrapper[4858]: E0218 00:49:34.986560 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:34 crc 
kubenswrapper[4858]: E0218 00:49:34.986599 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert podName:58a2adef-c01f-464e-aa1d-8c2d8a6e5c58 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:35.986585027 +0000 UTC m=+929.292421759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert") pod "infra-operator-controller-manager-79d975b745-ndk6f" (UID: "58a2adef-c01f-464e-aa1d-8c2d8a6e5c58") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:34 crc kubenswrapper[4858]: E0218 00:49:34.986869 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:34 crc kubenswrapper[4858]: E0218 00:49:34.986930 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert podName:229552d0-e72e-49af-a4c7-6052e2a7bf5a nodeName:}" failed. No retries permitted until 2026-02-18 00:49:35.486914765 +0000 UTC m=+928.792751497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" (UID: "229552d0-e72e-49af-a4c7-6052e2a7bf5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.987839 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-ghrkx"] Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.988878 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" Feb 18 00:49:34 crc kubenswrapper[4858]: I0218 00:49:34.995686 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fgnll" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.011425 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk98m\" (UniqueName: \"kubernetes.io/projected/229552d0-e72e-49af-a4c7-6052e2a7bf5a-kube-api-access-qk98m\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.016574 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-ghrkx"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.021245 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.031933 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzxr7\" (UniqueName: \"kubernetes.io/projected/447c1cfc-d76f-4985-bd95-285a3fbc63cc-kube-api-access-tzxr7\") pod \"placement-operator-controller-manager-8497b45c89-5b4nx\" (UID: \"447c1cfc-d76f-4985-bd95-285a3fbc63cc\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.032264 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.038363 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.044340 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.047809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mhxv\" (UniqueName: \"kubernetes.io/projected/eae2173c-97fd-4d89-8d72-0d44f7c87f9b-kube-api-access-5mhxv\") pod \"swift-operator-controller-manager-68f46476f-xhtjl\" (UID: \"eae2173c-97fd-4d89-8d72-0d44f7c87f9b\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.048159 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.054413 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wxmnf" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.066011 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.070952 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.087676 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5txq\" (UniqueName: \"kubernetes.io/projected/12badb74-0862-49e0-95a9-2e29d4b8dcf7-kube-api-access-v5txq\") pod \"test-operator-controller-manager-7866795846-ghrkx\" (UID: \"12badb74-0862-49e0-95a9-2e29d4b8dcf7\") " pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.087731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmjv4\" (UniqueName: \"kubernetes.io/projected/e60cf8fd-9033-4f85-a2a1-16441bd58a56-kube-api-access-hmjv4\") pod \"telemetry-operator-controller-manager-c6f9cb8b-f7txj\" (UID: \"e60cf8fd-9033-4f85-a2a1-16441bd58a56\") " pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.098622 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.107078 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmjv4\" (UniqueName: \"kubernetes.io/projected/e60cf8fd-9033-4f85-a2a1-16441bd58a56-kube-api-access-hmjv4\") pod \"telemetry-operator-controller-manager-c6f9cb8b-f7txj\" (UID: \"e60cf8fd-9033-4f85-a2a1-16441bd58a56\") " pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.148193 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.150182 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.158801 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.164703 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.167872 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.168102 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4tnz2" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.168136 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.180643 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.195185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5txq\" (UniqueName: \"kubernetes.io/projected/12badb74-0862-49e0-95a9-2e29d4b8dcf7-kube-api-access-v5txq\") pod \"test-operator-controller-manager-7866795846-ghrkx\" (UID: \"12badb74-0862-49e0-95a9-2e29d4b8dcf7\") " pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.195483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkl2m\" (UniqueName: \"kubernetes.io/projected/54724d5e-2417-4241-9fd0-36f9e3c72124-kube-api-access-rkl2m\") pod \"watcher-operator-controller-manager-5db88f68c-cqmz8\" (UID: \"54724d5e-2417-4241-9fd0-36f9e3c72124\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.203703 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.208015 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.214834 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-p88wz" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.225750 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5txq\" (UniqueName: \"kubernetes.io/projected/12badb74-0862-49e0-95a9-2e29d4b8dcf7-kube-api-access-v5txq\") pod \"test-operator-controller-manager-7866795846-ghrkx\" (UID: \"12badb74-0862-49e0-95a9-2e29d4b8dcf7\") " pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.227342 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.288606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" event={"ID":"e28fd875-635a-43eb-ae2e-2544aa39cc84","Type":"ContainerStarted","Data":"6799b00033e0e38533b962b700ac377994e908d1384bba4fd6f8e3bf51956ddf"} Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.289835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" event={"ID":"ca07929f-e6f1-4b35-bcd8-b8a8c2fa6ce6","Type":"ContainerStarted","Data":"2c90b6041518761cefb5e29d42cd6eb653c4ca625469edcd3e9712f03e5a46a1"} Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.294591 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.299241 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrhs8\" (UniqueName: \"kubernetes.io/projected/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-kube-api-access-vrhs8\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.299306 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.299335 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkl2m\" (UniqueName: \"kubernetes.io/projected/54724d5e-2417-4241-9fd0-36f9e3c72124-kube-api-access-rkl2m\") pod \"watcher-operator-controller-manager-5db88f68c-cqmz8\" (UID: \"54724d5e-2417-4241-9fd0-36f9e3c72124\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.299373 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs\") pod 
\"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.300394 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.326602 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkl2m\" (UniqueName: \"kubernetes.io/projected/54724d5e-2417-4241-9fd0-36f9e3c72124-kube-api-access-rkl2m\") pod \"watcher-operator-controller-manager-5db88f68c-cqmz8\" (UID: \"54724d5e-2417-4241-9fd0-36f9e3c72124\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.401124 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrhs8\" (UniqueName: \"kubernetes.io/projected/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-kube-api-access-vrhs8\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.401204 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xljw\" (UniqueName: \"kubernetes.io/projected/b83c91fe-13d0-4711-9f90-3da887fa657d-kube-api-access-7xljw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dqvkf\" (UID: \"b83c91fe-13d0-4711-9f90-3da887fa657d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.401228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.401285 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.401426 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.401481 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:35.901465574 +0000 UTC m=+929.207302306 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "metrics-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.401724 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.401750 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:35.90174193 +0000 UTC m=+929.207578662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "webhook-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.436207 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrhs8\" (UniqueName: \"kubernetes.io/projected/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-kube-api-access-vrhs8\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.468288 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx"] Feb 18 00:49:35 crc kubenswrapper[4858]: W0218 00:49:35.481216 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc33cc4eb_a44e_4b2f_8ea8_1688d831a12a.slice/crio-ae275b7752f37056ea9238ab02cdd8782d8a1649b1e63eaf2ea3b21142a367d6 WatchSource:0}: Error finding container ae275b7752f37056ea9238ab02cdd8782d8a1649b1e63eaf2ea3b21142a367d6: Status 404 returned error can't find the container with id ae275b7752f37056ea9238ab02cdd8782d8a1649b1e63eaf2ea3b21142a367d6 Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.494602 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.502189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xljw\" (UniqueName: \"kubernetes.io/projected/b83c91fe-13d0-4711-9f90-3da887fa657d-kube-api-access-7xljw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dqvkf\" (UID: \"b83c91fe-13d0-4711-9f90-3da887fa657d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.502261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.502429 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.502480 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert podName:229552d0-e72e-49af-a4c7-6052e2a7bf5a nodeName:}" failed. No retries permitted until 2026-02-18 00:49:36.502465716 +0000 UTC m=+929.808302448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" (UID: "229552d0-e72e-49af-a4c7-6052e2a7bf5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.528250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xljw\" (UniqueName: \"kubernetes.io/projected/b83c91fe-13d0-4711-9f90-3da887fa657d-kube-api-access-7xljw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dqvkf\" (UID: \"b83c91fe-13d0-4711-9f90-3da887fa657d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.535566 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.535982 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.548644 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4"] Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.563850 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.908394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.908450 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.908638 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.908683 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:36.908669601 +0000 UTC m=+930.214506333 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "metrics-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.908976 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: E0218 00:49:35.908998 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:36.908991489 +0000 UTC m=+930.214828221 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "webhook-server-cert" not found Feb 18 00:49:35 crc kubenswrapper[4858]: I0218 00:49:35.957539 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv"] Feb 18 00:49:35 crc kubenswrapper[4858]: W0218 00:49:35.959183 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9df9a5db_2273_4253_9b76_b67377d8f7f6.slice/crio-53d3dab52848df38e6cd8bec54a1451cfc1b0e1799220c837ca8f62c0429eda9 WatchSource:0}: Error finding container 53d3dab52848df38e6cd8bec54a1451cfc1b0e1799220c837ca8f62c0429eda9: Status 404 returned error can't find the container with id 53d3dab52848df38e6cd8bec54a1451cfc1b0e1799220c837ca8f62c0429eda9 Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.010913 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.011121 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.011234 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert podName:58a2adef-c01f-464e-aa1d-8c2d8a6e5c58 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:38.011219212 +0000 UTC m=+931.317055944 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert") pod "infra-operator-controller-manager-79d975b745-ndk6f" (UID: "58a2adef-c01f-464e-aa1d-8c2d8a6e5c58") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.288754 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod860622ee_6268_4ff0_a2ae_403ae8b984fc.slice/crio-f58aa3a696cce3dd741dd1ac934347a142a78ab3916b62e60ddb310fb8a6833c WatchSource:0}: Error finding container f58aa3a696cce3dd741dd1ac934347a142a78ab3916b62e60ddb310fb8a6833c: Status 404 returned error can't find the container with id f58aa3a696cce3dd741dd1ac934347a142a78ab3916b62e60ddb310fb8a6833c Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.289321 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.306542 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" event={"ID":"597262ab-929d-4c51-8400-d6a6df47dcbd","Type":"ContainerStarted","Data":"ca0b7844b9d92f6bc8069f7ff04dead48cc4f2c411b03a01d9ba10f8a0632e9f"} Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.311995 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.312032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" event={"ID":"9df9a5db-2273-4253-9b76-b67377d8f7f6","Type":"ContainerStarted","Data":"53d3dab52848df38e6cd8bec54a1451cfc1b0e1799220c837ca8f62c0429eda9"} Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.317129 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod758bf8e1_fe1b_4c02_8ad8_6d80237e0024.slice/crio-bdb86e8004e7b7b09f5bef1bd0cb57d1b576ca49897fa7f46e763ed43dd4ddf8 WatchSource:0}: Error finding container bdb86e8004e7b7b09f5bef1bd0cb57d1b576ca49897fa7f46e763ed43dd4ddf8: Status 404 returned error can't find the container with id bdb86e8004e7b7b09f5bef1bd0cb57d1b576ca49897fa7f46e763ed43dd4ddf8 Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.326530 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" event={"ID":"b0ca0509-6112-4163-a060-ea15122be64a","Type":"ContainerStarted","Data":"2802805a8210dae369c9a483e9e412aa815674fb88c489d7824354cbea67d0e4"} Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.333060 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.337763 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" event={"ID":"860622ee-6268-4ff0-a2ae-403ae8b984fc","Type":"ContainerStarted","Data":"f58aa3a696cce3dd741dd1ac934347a142a78ab3916b62e60ddb310fb8a6833c"} Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.348211 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx"] Feb 18 
00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.349701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" event={"ID":"c33cc4eb-a44e-4b2f-8ea8-1688d831a12a","Type":"ContainerStarted","Data":"ae275b7752f37056ea9238ab02cdd8782d8a1649b1e63eaf2ea3b21142a367d6"} Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.365578 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb"] Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.367610 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28b5bfad_085d_48c6_b15f_c431d57de698.slice/crio-3c1c3ae954f82dfb42fb2f241b044606861d8899f9ee8a706e1cec3534d2c0eb WatchSource:0}: Error finding container 3c1c3ae954f82dfb42fb2f241b044606861d8899f9ee8a706e1cec3534d2c0eb: Status 404 returned error can't find the container with id 3c1c3ae954f82dfb42fb2f241b044606861d8899f9ee8a706e1cec3534d2c0eb Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.374641 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeae2173c_97fd_4d89_8d72_0d44f7c87f9b.slice/crio-6015a15503ce9d82d1a6936f472c8cd0e74dc7bd9dac6f45058c16db00af7b60 WatchSource:0}: Error finding container 6015a15503ce9d82d1a6936f472c8cd0e74dc7bd9dac6f45058c16db00af7b60: Status 404 returned error can't find the container with id 6015a15503ce9d82d1a6936f472c8cd0e74dc7bd9dac6f45058c16db00af7b60 Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.375969 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod447c1cfc_d76f_4985_bd95_285a3fbc63cc.slice/crio-f730d2e23d9f9491349b8a78883365d4a9017b7d633ee2f37789ca690708232c WatchSource:0}: Error finding container f730d2e23d9f9491349b8a78883365d4a9017b7d633ee2f37789ca690708232c: Status 404 returned error can't find the container with id f730d2e23d9f9491349b8a78883365d4a9017b7d633ee2f37789ca690708232c Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.377210 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5dba120_621f_4686_8e83_6f10779d8cfb.slice/crio-e120895ff0230df96311bbd888a4cadb9149dd67556c714c75bdf15fa09528ec WatchSource:0}: Error finding container e120895ff0230df96311bbd888a4cadb9149dd67556c714c75bdf15fa09528ec: Status 404 returned error can't find the container with id e120895ff0230df96311bbd888a4cadb9149dd67556c714c75bdf15fa09528ec Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.380613 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh"] Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.387835 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbddd921f_895d_4b1d_8203_2aff8a721ed9.slice/crio-9de369f6b6ad208baa61bc599d29d192aded096d203704ba560931bd61007ceb WatchSource:0}: Error finding container 9de369f6b6ad208baa61bc599d29d192aded096d203704ba560931bd61007ceb: Status 404 returned error can't find the container with id 9de369f6b6ad208baa61bc599d29d192aded096d203704ba560931bd61007ceb Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.388605 4858 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3e44d9b_6d44_4aa9_9100_c2e139131ec9.slice/crio-a381c1ac18681282dcd6fe5e2564970b2f234727fdde09e104ad619b4f472b9a WatchSource:0}: Error finding container a381c1ac18681282dcd6fe5e2564970b2f234727fdde09e104ad619b4f472b9a: Status 404 returned error can't find the container with id a381c1ac18681282dcd6fe5e2564970b2f234727fdde09e104ad619b4f472b9a Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.393471 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12badb74_0862_49e0_95a9_2e29d4b8dcf7.slice/crio-dc978139197d818b005b1aad4604504e1bc20677d4dfb814b71c30c413293915 WatchSource:0}: Error finding container dc978139197d818b005b1aad4604504e1bc20677d4dfb814b71c30c413293915: Status 404 returned error can't find the container with id dc978139197d818b005b1aad4604504e1bc20677d4dfb814b71c30c413293915 Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.394060 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wlxg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-dm2f9_openstack-operators(f3e44d9b-6d44-4aa9-9100-c2e139131ec9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:49:36 crc kubenswrapper[4858]: 
E0218 00:49:36.395953 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" podUID="f3e44d9b-6d44-4aa9-9100-c2e139131ec9" Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.397532 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddda54f36_cfc8_468e_8101_f8041735931f.slice/crio-c0da90b79127cd14c10ef649558dba62599f7ef83ada4f5039372ea1f602b460 WatchSource:0}: Error finding container c0da90b79127cd14c10ef649558dba62599f7ef83ada4f5039372ea1f602b460: Status 404 returned error can't find the container with id c0da90b79127cd14c10ef649558dba62599f7ef83ada4f5039372ea1f602b460 Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.403650 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2j2qd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-qqgpg_openstack-operators(dda54f36-cfc8-468e-8101-f8041735931f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.405031 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" podUID="dda54f36-cfc8-468e-8101-f8041735931f" Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.405461 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bdz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-8v5bz_openstack-operators(11bc7389-c53b-4030-892b-43da85d70fe1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:49:36 crc kubenswrapper[4858]: W0218 00:49:36.406534 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb83c91fe_13d0_4711_9f90_3da887fa657d.slice/crio-16bc007487167bac578f9c0de69883ce0bc3f090fd6482ad387a6a9b052fb70c WatchSource:0}: Error finding container 16bc007487167bac578f9c0de69883ce0bc3f090fd6482ad387a6a9b052fb70c: Status 404 returned error can't find the container with id 16bc007487167bac578f9c0de69883ce0bc3f090fd6482ad387a6a9b052fb70c Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.406677 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" 
podUID="11bc7389-c53b-4030-892b-43da85d70fe1" Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.406979 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5txq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-ghrkx_openstack-operators(12badb74-0862-49e0-95a9-2e29d4b8dcf7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.409847 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xljw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dqvkf_openstack-operators(b83c91fe-13d0-4711-9f90-3da887fa657d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.410278 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9"] Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.410348 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" podUID="12badb74-0862-49e0-95a9-2e29d4b8dcf7" Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.411878 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" podUID="b83c91fe-13d0-4711-9f90-3da887fa657d" Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.417957 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.430699 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.438451 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.447895 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-ghrkx"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.454422 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.481775 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.491948 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf"] Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.519430 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.520527 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.520603 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert podName:229552d0-e72e-49af-a4c7-6052e2a7bf5a nodeName:}" failed. No retries permitted until 2026-02-18 00:49:38.520565402 +0000 UTC m=+931.826402134 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" (UID: "229552d0-e72e-49af-a4c7-6052e2a7bf5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.926808 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:36 crc kubenswrapper[4858]: I0218 00:49:36.927218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.927005 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.927333 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:38.927314781 +0000 UTC m=+932.233151513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "webhook-server-cert" not found Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.927412 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:49:36 crc kubenswrapper[4858]: E0218 00:49:36.927571 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. 
No retries permitted until 2026-02-18 00:49:38.927553767 +0000 UTC m=+932.233390499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "metrics-server-cert" not found Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.365200 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" event={"ID":"b83c91fe-13d0-4711-9f90-3da887fa657d","Type":"ContainerStarted","Data":"16bc007487167bac578f9c0de69883ce0bc3f090fd6482ad387a6a9b052fb70c"} Feb 18 00:49:37 crc kubenswrapper[4858]: E0218 00:49:37.370985 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" podUID="b83c91fe-13d0-4711-9f90-3da887fa657d" Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.387453 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" event={"ID":"758bf8e1-fe1b-4c02-8ad8-6d80237e0024","Type":"ContainerStarted","Data":"bdb86e8004e7b7b09f5bef1bd0cb57d1b576ca49897fa7f46e763ed43dd4ddf8"} Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.391426 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" event={"ID":"11bc7389-c53b-4030-892b-43da85d70fe1","Type":"ContainerStarted","Data":"41d8b1e939271bba420a7ea519b85b7c9b02f3fd78d00fc1d954ef3c19fe4b6c"} Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.392956 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" event={"ID":"54724d5e-2417-4241-9fd0-36f9e3c72124","Type":"ContainerStarted","Data":"aebce0c800026631a339225502c75065ed04d4199f5a08736d34f2c3a8b063f4"} Feb 18 00:49:37 crc kubenswrapper[4858]: E0218 00:49:37.393432 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" podUID="11bc7389-c53b-4030-892b-43da85d70fe1" Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.395629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" event={"ID":"12badb74-0862-49e0-95a9-2e29d4b8dcf7","Type":"ContainerStarted","Data":"dc978139197d818b005b1aad4604504e1bc20677d4dfb814b71c30c413293915"} Feb 18 00:49:37 crc kubenswrapper[4858]: E0218 00:49:37.397265 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" 
podUID="12badb74-0862-49e0-95a9-2e29d4b8dcf7" Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.397682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" event={"ID":"447c1cfc-d76f-4985-bd95-285a3fbc63cc","Type":"ContainerStarted","Data":"f730d2e23d9f9491349b8a78883365d4a9017b7d633ee2f37789ca690708232c"} Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.400423 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" event={"ID":"eae2173c-97fd-4d89-8d72-0d44f7c87f9b","Type":"ContainerStarted","Data":"6015a15503ce9d82d1a6936f472c8cd0e74dc7bd9dac6f45058c16db00af7b60"} Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.407504 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" event={"ID":"e60cf8fd-9033-4f85-a2a1-16441bd58a56","Type":"ContainerStarted","Data":"8f316ee5bda22806c1a95c0ac7b7db147cca0713d35e14d75b0af58ff4ebea7e"} Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.411712 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" event={"ID":"f3e44d9b-6d44-4aa9-9100-c2e139131ec9","Type":"ContainerStarted","Data":"a381c1ac18681282dcd6fe5e2564970b2f234727fdde09e104ad619b4f472b9a"} Feb 18 00:49:37 crc kubenswrapper[4858]: E0218 00:49:37.413521 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" podUID="f3e44d9b-6d44-4aa9-9100-c2e139131ec9" Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.416408 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" event={"ID":"28b5bfad-085d-48c6-b15f-c431d57de698","Type":"ContainerStarted","Data":"3c1c3ae954f82dfb42fb2f241b044606861d8899f9ee8a706e1cec3534d2c0eb"} Feb 18 00:49:37 crc kubenswrapper[4858]: E0218 00:49:37.438864 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" podUID="dda54f36-cfc8-468e-8101-f8041735931f" Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.465578 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" event={"ID":"dda54f36-cfc8-468e-8101-f8041735931f","Type":"ContainerStarted","Data":"c0da90b79127cd14c10ef649558dba62599f7ef83ada4f5039372ea1f602b460"} Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.465618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" event={"ID":"bddd921f-895d-4b1d-8203-2aff8a721ed9","Type":"ContainerStarted","Data":"9de369f6b6ad208baa61bc599d29d192aded096d203704ba560931bd61007ceb"} Feb 18 00:49:37 crc kubenswrapper[4858]: I0218 00:49:37.465628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" event={"ID":"f5dba120-621f-4686-8e83-6f10779d8cfb","Type":"ContainerStarted","Data":"e120895ff0230df96311bbd888a4cadb9149dd67556c714c75bdf15fa09528ec"} Feb 18 00:49:38 crc kubenswrapper[4858]: I0218 00:49:38.052834 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.052974 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.053043 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert podName:58a2adef-c01f-464e-aa1d-8c2d8a6e5c58 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:42.053012711 +0000 UTC m=+935.358849433 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert") pod "infra-operator-controller-manager-79d975b745-ndk6f" (UID: "58a2adef-c01f-464e-aa1d-8c2d8a6e5c58") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.443454 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" podUID="f3e44d9b-6d44-4aa9-9100-c2e139131ec9" Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.443467 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" podUID="b83c91fe-13d0-4711-9f90-3da887fa657d" Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.443454 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" podUID="dda54f36-cfc8-468e-8101-f8041735931f" Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.443670 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" podUID="11bc7389-c53b-4030-892b-43da85d70fe1" Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.443747 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" podUID="12badb74-0862-49e0-95a9-2e29d4b8dcf7" Feb 18 00:49:38 crc kubenswrapper[4858]: I0218 00:49:38.561055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.561618 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.561664 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert podName:229552d0-e72e-49af-a4c7-6052e2a7bf5a nodeName:}" failed. No retries permitted until 2026-02-18 00:49:42.561649754 +0000 UTC m=+935.867486486 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" (UID: "229552d0-e72e-49af-a4c7-6052e2a7bf5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:38 crc kubenswrapper[4858]: I0218 00:49:38.965453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.965626 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:49:38 crc kubenswrapper[4858]: I0218 00:49:38.966005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.966101 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:42.966081136 +0000 UTC m=+936.271917868 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "metrics-server-cert" not found Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.966199 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:49:38 crc kubenswrapper[4858]: E0218 00:49:38.966291 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:42.96627174 +0000 UTC m=+936.272108472 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "webhook-server-cert" not found Feb 18 00:49:39 crc kubenswrapper[4858]: E0218 00:49:39.457144 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" podUID="dda54f36-cfc8-468e-8101-f8041735931f" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.123766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:42 crc kubenswrapper[4858]: E0218 00:49:42.124040 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:42 crc kubenswrapper[4858]: E0218 00:49:42.125358 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert podName:58a2adef-c01f-464e-aa1d-8c2d8a6e5c58 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:50.125328103 +0000 UTC m=+943.431164875 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert") pod "infra-operator-controller-manager-79d975b745-ndk6f" (UID: "58a2adef-c01f-464e-aa1d-8c2d8a6e5c58") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.587005 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vt29r"] Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.588316 4858 util.go:30] "No sandbox for pod can be found. 
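The durationBeforeRetry values attached to the failed MountVolume operations above grow from 2s to 4s to 8s (and to 16s further down), which is plain capped exponential backoff: the kubelet keeps retrying the secret mount until the secret appears or the pod goes away. A rough stdlib-only sketch of that retry shape follows; the initial and maximum delays and the fake mount operation are assumptions for illustration, not kubelet code.

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff keeps calling op until it succeeds, doubling the wait
// between attempts up to maxDelay, the same 2s -> 4s -> 8s -> 16s pattern
// visible in the durationBeforeRetry fields of the log.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration) {
	delay := initial
	for attempt := 1; ; attempt++ {
		if err := op(); err == nil {
			fmt.Printf("attempt %d: succeeded\n", attempt)
			return
		}
		fmt.Printf("attempt %d: failed, no retries permitted for %s\n", attempt, delay)
		time.Sleep(delay)
		if delay < maxDelay {
			delay *= 2
		}
	}
}

func main() {
	tries := 0
	// Hypothetical operation: fails four times (e.g. the secret is not yet created), then succeeds.
	mountSecret := func() error {
		tries++
		if tries <= 4 {
			return fmt.Errorf(`secret "webhook-server-cert" not found`)
		}
		return nil
	}
	retryWithBackoff(mountSecret, 2*time.Second, 2*time.Minute)
}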
Need to start a new one" pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.602650 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vt29r"] Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.649873 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:42 crc kubenswrapper[4858]: E0218 00:49:42.650079 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:42 crc kubenswrapper[4858]: E0218 00:49:42.650146 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert podName:229552d0-e72e-49af-a4c7-6052e2a7bf5a nodeName:}" failed. No retries permitted until 2026-02-18 00:49:50.65012791 +0000 UTC m=+943.955964642 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" (UID: "229552d0-e72e-49af-a4c7-6052e2a7bf5a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.751626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79qxp\" (UniqueName: \"kubernetes.io/projected/5bac6607-84f1-4287-b432-fbcc2247c032-kube-api-access-79qxp\") pod \"certified-operators-vt29r\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.751679 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-catalog-content\") pod \"certified-operators-vt29r\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.751730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-utilities\") pod \"certified-operators-vt29r\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.853547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79qxp\" (UniqueName: \"kubernetes.io/projected/5bac6607-84f1-4287-b432-fbcc2247c032-kube-api-access-79qxp\") pod \"certified-operators-vt29r\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.853605 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-catalog-content\") pod 
\"certified-operators-vt29r\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.853630 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-utilities\") pod \"certified-operators-vt29r\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.854121 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-catalog-content\") pod \"certified-operators-vt29r\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.854148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-utilities\") pod \"certified-operators-vt29r\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.874085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79qxp\" (UniqueName: \"kubernetes.io/projected/5bac6607-84f1-4287-b432-fbcc2247c032-kube-api-access-79qxp\") pod \"certified-operators-vt29r\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:42 crc kubenswrapper[4858]: I0218 00:49:42.911922 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:49:43 crc kubenswrapper[4858]: I0218 00:49:43.055860 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:43 crc kubenswrapper[4858]: I0218 00:49:43.055930 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:43 crc kubenswrapper[4858]: E0218 00:49:43.056013 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:49:43 crc kubenswrapper[4858]: E0218 00:49:43.056077 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:51.056062879 +0000 UTC m=+944.361899611 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "webhook-server-cert" not found Feb 18 00:49:43 crc kubenswrapper[4858]: E0218 00:49:43.056150 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:49:43 crc kubenswrapper[4858]: E0218 00:49:43.056234 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:49:51.056214922 +0000 UTC m=+944.362051654 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "metrics-server-cert" not found Feb 18 00:49:49 crc kubenswrapper[4858]: E0218 00:49:49.380553 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.74:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 18 00:49:49 crc kubenswrapper[4858]: E0218 00:49:49.381105 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.74:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 18 00:49:49 crc kubenswrapper[4858]: E0218 00:49:49.381290 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.74:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hmjv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-c6f9cb8b-f7txj_openstack-operators(e60cf8fd-9033-4f85-a2a1-16441bd58a56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:49:49 crc kubenswrapper[4858]: E0218 00:49:49.382467 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" podUID="e60cf8fd-9033-4f85-a2a1-16441bd58a56" Feb 18 00:49:49 crc kubenswrapper[4858]: E0218 00:49:49.560843 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.74:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" podUID="e60cf8fd-9033-4f85-a2a1-16441bd58a56" Feb 18 00:49:49 crc kubenswrapper[4858]: I0218 00:49:49.767934 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vt29r"] Feb 18 00:49:49 crc kubenswrapper[4858]: W0218 00:49:49.802601 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bac6607_84f1_4287_b432_fbcc2247c032.slice/crio-d04c6dc98e4c9d187a4eeb5b5ac8806295c84d37cb6231689dd1144e7adf4dd6 WatchSource:0}: Error finding container d04c6dc98e4c9d187a4eeb5b5ac8806295c84d37cb6231689dd1144e7adf4dd6: Status 404 returned error can't find the container with id d04c6dc98e4c9d187a4eeb5b5ac8806295c84d37cb6231689dd1144e7adf4dd6 Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.187744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:49:50 crc kubenswrapper[4858]: E0218 00:49:50.187925 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:50 crc kubenswrapper[4858]: E0218 00:49:50.188004 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert podName:58a2adef-c01f-464e-aa1d-8c2d8a6e5c58 nodeName:}" failed. 
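All of these MountVolume.SetUp failures share one cause: the TLS Secrets the operator Deployments mount (webhook-server-cert, metrics-server-cert, infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert) do not yet exist in the openstack-operators namespace. They are created asynchronously by whatever manages the operators' webhook and metrics certificates, and once present the mounts go through, as the MountVolume.SetUp succeeded entry for the baremetal operator cert further down shows. A small client-go sketch, assuming a kubeconfig path, for checking which of those Secrets are present; the secret names are copied from the errors above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for the environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Secret names taken from the "Couldn't get secret" errors in the log.
	names := []string{
		"webhook-server-cert",
		"metrics-server-cert",
		"infra-operator-webhook-server-cert",
		"openstack-baremetal-operator-webhook-server-cert",
	}
	for _, name := range names {
		_, err := client.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
		} else {
			fmt.Printf("%s: present\n", name)
		}
	}
}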
No retries permitted until 2026-02-18 00:50:06.187986019 +0000 UTC m=+959.493822751 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert") pod "infra-operator-controller-manager-79d975b745-ndk6f" (UID: "58a2adef-c01f-464e-aa1d-8c2d8a6e5c58") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.602987 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" event={"ID":"f5dba120-621f-4686-8e83-6f10779d8cfb","Type":"ContainerStarted","Data":"822f7730f72b0284f5086735195a877799cc4cc0828de7e98be9d2f85db32829"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.603889 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.605248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" event={"ID":"bddd921f-895d-4b1d-8203-2aff8a721ed9","Type":"ContainerStarted","Data":"b2182c9178951d708dba78b94dc78811320c94e4ef4d2994c6394fe37d7c5414"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.605635 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.613153 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" event={"ID":"9df9a5db-2273-4253-9b76-b67377d8f7f6","Type":"ContainerStarted","Data":"c4b4b62d1c521249f6a505c6c78bda0d4804bb843f6a27f7e743c317201e60b6"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.613577 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.625678 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" event={"ID":"758bf8e1-fe1b-4c02-8ad8-6d80237e0024","Type":"ContainerStarted","Data":"7658ce3cd17cb592cf6c892513b6e29e64103d02c2c53ca3e44cd39c460f7fab"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.626411 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.637699 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" event={"ID":"b0ca0509-6112-4163-a060-ea15122be64a","Type":"ContainerStarted","Data":"8862ef23e21ce845e4b3c8f3a6daf9ff2c537af1af33f169085ed0b5ea311ea6"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.638314 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.639965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" event={"ID":"54724d5e-2417-4241-9fd0-36f9e3c72124","Type":"ContainerStarted","Data":"e667512435f9863702a9530e3f8be29610debc08477c7eeab983c59470eade59"} Feb 18 00:49:50 
crc kubenswrapper[4858]: I0218 00:49:50.640330 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.641842 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" event={"ID":"ca07929f-e6f1-4b35-bcd8-b8a8c2fa6ce6","Type":"ContainerStarted","Data":"5d5ae04d678728e9eaad22cf447ada3a81ac6bd9fc485ee0565d572e851cdeec"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.642463 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.643144 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" event={"ID":"eae2173c-97fd-4d89-8d72-0d44f7c87f9b","Type":"ContainerStarted","Data":"8b253f17abd5b72e7b1936e82e61751be70745fb2ff1da0f4da4b94aa8d3acb7"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.643482 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.646363 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" event={"ID":"597262ab-929d-4c51-8400-d6a6df47dcbd","Type":"ContainerStarted","Data":"ae249e48dbb3ddbfe86f63b554c5272bfa65648ccc6f496f350e07b728574436"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.646782 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.648744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" event={"ID":"860622ee-6268-4ff0-a2ae-403ae8b984fc","Type":"ContainerStarted","Data":"4bcff478d73c494ac0f46d83eca5d1e9dac9ee20562d5dfae8c758fec41a3efc"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.649067 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.656360 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" event={"ID":"28b5bfad-085d-48c6-b15f-c431d57de698","Type":"ContainerStarted","Data":"65a2021f8a48903b6ca360536efe27df2acbb796969d47e1d9b251729efa8c8a"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.656821 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.671478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" event={"ID":"c33cc4eb-a44e-4b2f-8ea8-1688d831a12a","Type":"ContainerStarted","Data":"f641aea939be0209b15776fff62e7c94a9c54a5a725e81ccf934aebce4632ca7"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.672127 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" Feb 18 00:49:50 
crc kubenswrapper[4858]: I0218 00:49:50.685505 4858 generic.go:334] "Generic (PLEG): container finished" podID="5bac6607-84f1-4287-b432-fbcc2247c032" containerID="b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c" exitCode=0 Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.686144 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt29r" event={"ID":"5bac6607-84f1-4287-b432-fbcc2247c032","Type":"ContainerDied","Data":"b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.686169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt29r" event={"ID":"5bac6607-84f1-4287-b432-fbcc2247c032","Type":"ContainerStarted","Data":"d04c6dc98e4c9d187a4eeb5b5ac8806295c84d37cb6231689dd1144e7adf4dd6"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.695153 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.701794 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" event={"ID":"e28fd875-635a-43eb-ae2e-2544aa39cc84","Type":"ContainerStarted","Data":"9342f0d310583c90282fd92325bdf1087e3c1700ca4f6c816fc981f149777d0d"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.702395 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.705106 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229552d0-e72e-49af-a4c7-6052e2a7bf5a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn\" (UID: \"229552d0-e72e-49af-a4c7-6052e2a7bf5a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.723043 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.728714 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" event={"ID":"447c1cfc-d76f-4985-bd95-285a3fbc63cc","Type":"ContainerStarted","Data":"ed8767ce77498820571391b19ef1d3a094ef39d2a1dfaa421d5d4db639aeb785"} Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.729214 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.778514 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" podStartSLOduration=3.691495105 podStartE2EDuration="16.778486227s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.382688571 +0000 UTC m=+929.688525303" lastFinishedPulling="2026-02-18 00:49:49.469679663 +0000 UTC m=+942.775516425" observedRunningTime="2026-02-18 00:49:50.690692757 +0000 UTC m=+943.996529489" watchObservedRunningTime="2026-02-18 00:49:50.778486227 +0000 UTC m=+944.084322959" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.848043 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" podStartSLOduration=3.344468903 podStartE2EDuration="16.848028293s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:35.96273633 +0000 UTC m=+929.268573062" lastFinishedPulling="2026-02-18 00:49:49.4662957 +0000 UTC m=+942.772132452" observedRunningTime="2026-02-18 00:49:50.786243887 +0000 UTC m=+944.092080619" watchObservedRunningTime="2026-02-18 00:49:50.848028293 +0000 UTC m=+944.153865025" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.914724 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" podStartSLOduration=3.123369402 podStartE2EDuration="16.91470742s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:35.672872152 +0000 UTC m=+928.978708914" lastFinishedPulling="2026-02-18 00:49:49.46421017 +0000 UTC m=+942.770046932" observedRunningTime="2026-02-18 00:49:50.848723401 +0000 UTC m=+944.154560133" watchObservedRunningTime="2026-02-18 00:49:50.91470742 +0000 UTC m=+944.220544152" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.943907 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" podStartSLOduration=2.981384501 podStartE2EDuration="16.943892662s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:35.276843005 +0000 UTC m=+928.582679737" lastFinishedPulling="2026-02-18 00:49:49.239351166 +0000 UTC m=+942.545187898" observedRunningTime="2026-02-18 00:49:50.943150023 +0000 UTC m=+944.248986755" watchObservedRunningTime="2026-02-18 00:49:50.943892662 +0000 UTC m=+944.249729394" Feb 18 00:49:50 crc kubenswrapper[4858]: I0218 00:49:50.944794 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" podStartSLOduration=3.853703572 
podStartE2EDuration="16.944789683s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.378250273 +0000 UTC m=+929.684087005" lastFinishedPulling="2026-02-18 00:49:49.469334334 +0000 UTC m=+942.775173116" observedRunningTime="2026-02-18 00:49:50.911673626 +0000 UTC m=+944.217510358" watchObservedRunningTime="2026-02-18 00:49:50.944789683 +0000 UTC m=+944.250626415" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.007462 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" podStartSLOduration=3.867872335 podStartE2EDuration="17.007448691s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.325888155 +0000 UTC m=+929.631724887" lastFinishedPulling="2026-02-18 00:49:49.465464491 +0000 UTC m=+942.771301243" observedRunningTime="2026-02-18 00:49:51.003445553 +0000 UTC m=+944.309282295" watchObservedRunningTime="2026-02-18 00:49:51.007448691 +0000 UTC m=+944.313285423" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.060803 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" podStartSLOduration=3.903038923 podStartE2EDuration="17.060785261s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.306269927 +0000 UTC m=+929.612106649" lastFinishedPulling="2026-02-18 00:49:49.464016245 +0000 UTC m=+942.769852987" observedRunningTime="2026-02-18 00:49:51.038009496 +0000 UTC m=+944.343846228" watchObservedRunningTime="2026-02-18 00:49:51.060785261 +0000 UTC m=+944.366621993" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.096439 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" podStartSLOduration=3.161948953 podStartE2EDuration="17.096421871s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:35.486477487 +0000 UTC m=+928.792314219" lastFinishedPulling="2026-02-18 00:49:49.420950365 +0000 UTC m=+942.726787137" observedRunningTime="2026-02-18 00:49:51.090896636 +0000 UTC m=+944.396733368" watchObservedRunningTime="2026-02-18 00:49:51.096421871 +0000 UTC m=+944.402258603" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.105217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.105268 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:49:51 crc kubenswrapper[4858]: E0218 00:49:51.105407 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:49:51 crc kubenswrapper[4858]: E0218 00:49:51.105452 4858 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:50:07.105439381 +0000 UTC m=+960.411276113 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "metrics-server-cert" not found Feb 18 00:49:51 crc kubenswrapper[4858]: E0218 00:49:51.106102 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:49:51 crc kubenswrapper[4858]: E0218 00:49:51.106135 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs podName:577edb6b-435b-4d2e-bb6c-3f9c7bac9256 nodeName:}" failed. No retries permitted until 2026-02-18 00:50:07.106127647 +0000 UTC m=+960.411964369 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs") pod "openstack-operator-controller-manager-669759659c-2sgf5" (UID: "577edb6b-435b-4d2e-bb6c-3f9c7bac9256") : secret "webhook-server-cert" not found Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.126248 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" podStartSLOduration=4.040367403 podStartE2EDuration="17.126232618s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.378165041 +0000 UTC m=+929.684001773" lastFinishedPulling="2026-02-18 00:49:49.464030236 +0000 UTC m=+942.769866988" observedRunningTime="2026-02-18 00:49:51.124221518 +0000 UTC m=+944.430058250" watchObservedRunningTime="2026-02-18 00:49:51.126232618 +0000 UTC m=+944.432069350" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.147305 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" podStartSLOduration=4.044137214 podStartE2EDuration="17.147286771s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.387635811 +0000 UTC m=+929.693472543" lastFinishedPulling="2026-02-18 00:49:49.490785368 +0000 UTC m=+942.796622100" observedRunningTime="2026-02-18 00:49:51.144245657 +0000 UTC m=+944.450082389" watchObservedRunningTime="2026-02-18 00:49:51.147286771 +0000 UTC m=+944.453123503" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.172923 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" podStartSLOduration=3.609665739 podStartE2EDuration="17.172906175s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:35.676147131 +0000 UTC m=+928.981983903" lastFinishedPulling="2026-02-18 00:49:49.239387597 +0000 UTC m=+942.545224339" observedRunningTime="2026-02-18 00:49:51.171503481 +0000 UTC m=+944.477340213" watchObservedRunningTime="2026-02-18 00:49:51.172906175 +0000 UTC m=+944.478742907" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.197346 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" podStartSLOduration=4.096646085 podStartE2EDuration="17.197330572s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.389893196 +0000 UTC m=+929.695729928" lastFinishedPulling="2026-02-18 00:49:49.490577643 +0000 UTC m=+942.796414415" observedRunningTime="2026-02-18 00:49:51.196815188 +0000 UTC m=+944.502651910" watchObservedRunningTime="2026-02-18 00:49:51.197330572 +0000 UTC m=+944.503167304" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.235339 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" podStartSLOduration=3.067201972 podStartE2EDuration="17.235321938s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:35.251604259 +0000 UTC m=+928.557440991" lastFinishedPulling="2026-02-18 00:49:49.419724215 +0000 UTC m=+942.725560957" observedRunningTime="2026-02-18 00:49:51.233314518 +0000 UTC m=+944.539151250" watchObservedRunningTime="2026-02-18 00:49:51.235321938 +0000 UTC m=+944.541158670" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.237695 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" podStartSLOduration=4.153364977 podStartE2EDuration="17.237690425s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.382933126 +0000 UTC m=+929.688769888" lastFinishedPulling="2026-02-18 00:49:49.467258604 +0000 UTC m=+942.773095336" observedRunningTime="2026-02-18 00:49:51.216067008 +0000 UTC m=+944.521903740" watchObservedRunningTime="2026-02-18 00:49:51.237690425 +0000 UTC m=+944.543527157" Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.546975 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn"] Feb 18 00:49:51 crc kubenswrapper[4858]: I0218 00:49:51.754781 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" event={"ID":"229552d0-e72e-49af-a4c7-6052e2a7bf5a","Type":"ContainerStarted","Data":"07fd53af44e57d393bff983fe0f859620aa1b9bc778e81c5a739ec336641a264"} Feb 18 00:49:52 crc kubenswrapper[4858]: I0218 00:49:52.767734 4858 generic.go:334] "Generic (PLEG): container finished" podID="5bac6607-84f1-4287-b432-fbcc2247c032" containerID="df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4" exitCode=0 Feb 18 00:49:52 crc kubenswrapper[4858]: I0218 00:49:52.767911 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt29r" event={"ID":"5bac6607-84f1-4287-b432-fbcc2247c032","Type":"ContainerDied","Data":"df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4"} Feb 18 00:49:54 crc kubenswrapper[4858]: I0218 00:49:54.510879 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8hqkm" Feb 18 00:49:54 crc kubenswrapper[4858]: I0218 00:49:54.529230 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-lrmvx" Feb 18 00:49:54 crc kubenswrapper[4858]: I0218 00:49:54.576418 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bkng8" Feb 18 00:49:54 crc kubenswrapper[4858]: I0218 00:49:54.590317 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ksv8b" Feb 18 00:49:54 crc kubenswrapper[4858]: I0218 00:49:54.630087 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rlds4" Feb 18 00:49:54 crc kubenswrapper[4858]: I0218 00:49:54.766852 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-rzzqb" Feb 18 00:49:54 crc kubenswrapper[4858]: I0218 00:49:54.909315 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-kvqvz" Feb 18 00:49:55 crc kubenswrapper[4858]: I0218 00:49:55.051030 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5b4nx" Feb 18 00:49:55 crc kubenswrapper[4858]: I0218 00:49:55.101935 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-9k4wv" Feb 18 00:49:55 crc kubenswrapper[4858]: I0218 00:49:55.162245 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xhtjl" Feb 18 00:49:55 crc kubenswrapper[4858]: I0218 00:49:55.265201 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:49:55 crc kubenswrapper[4858]: I0218 00:49:55.265270 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:49:55 crc kubenswrapper[4858]: I0218 00:49:55.265322 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:49:55 crc kubenswrapper[4858]: I0218 00:49:55.266033 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4232da30acbadfd7cea82898dfbb989b7b4cf3f5d440e264df04ddfd7051cdf7"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:49:55 crc kubenswrapper[4858]: I0218 00:49:55.266110 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://4232da30acbadfd7cea82898dfbb989b7b4cf3f5d440e264df04ddfd7051cdf7" gracePeriod=600 Feb 18 00:49:55 crc kubenswrapper[4858]: I0218 00:49:55.539368 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-cqmz8" Feb 18 00:49:57 crc kubenswrapper[4858]: I0218 00:49:57.826189 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="4232da30acbadfd7cea82898dfbb989b7b4cf3f5d440e264df04ddfd7051cdf7" exitCode=0 Feb 18 00:49:57 crc kubenswrapper[4858]: I0218 00:49:57.826268 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"4232da30acbadfd7cea82898dfbb989b7b4cf3f5d440e264df04ddfd7051cdf7"} Feb 18 00:49:57 crc kubenswrapper[4858]: I0218 00:49:57.826598 4858 scope.go:117] "RemoveContainer" containerID="010ce5804c87e4080adda85e7efd663cb5c56ffcff1160985ac0b6c58a57396c" Feb 18 00:50:04 crc kubenswrapper[4858]: E0218 00:50:04.520754 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 18 00:50:04 crc kubenswrapper[4858]: E0218 00:50:04.521983 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2j2qd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
neutron-operator-controller-manager-64ddbf8bb-qqgpg_openstack-operators(dda54f36-cfc8-468e-8101-f8041735931f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:50:04 crc kubenswrapper[4858]: E0218 00:50:04.524321 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" podUID="dda54f36-cfc8-468e-8101-f8041735931f" Feb 18 00:50:04 crc kubenswrapper[4858]: I0218 00:50:04.613315 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-74zsv" Feb 18 00:50:04 crc kubenswrapper[4858]: I0218 00:50:04.800218 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-qxqhh" Feb 18 00:50:04 crc kubenswrapper[4858]: I0218 00:50:04.928358 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-xwgm9" Feb 18 00:50:06 crc kubenswrapper[4858]: E0218 00:50:06.231750 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 18 00:50:06 crc kubenswrapper[4858]: E0218 00:50:06.232137 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bdz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-8v5bz_openstack-operators(11bc7389-c53b-4030-892b-43da85d70fe1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:50:06 crc kubenswrapper[4858]: E0218 00:50:06.233348 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" podUID="11bc7389-c53b-4030-892b-43da85d70fe1" Feb 18 00:50:06 crc kubenswrapper[4858]: I0218 00:50:06.264099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:50:06 crc kubenswrapper[4858]: I0218 00:50:06.286288 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a2adef-c01f-464e-aa1d-8c2d8a6e5c58-cert\") pod \"infra-operator-controller-manager-79d975b745-ndk6f\" (UID: \"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:50:06 crc kubenswrapper[4858]: I0218 00:50:06.500132 4858 util.go:30] "No sandbox for pod can be found. 
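
The container specs dumped in the ErrImagePull messages above print resource quantities in their internal form, for example "{{500 -3} {} 500m DecimalSI}" (500 x 10^-3 CPU cores) and "{{536870912 0} {} BinarySI}" (536870912 bytes). The small check below, which assumes the k8s.io/apimachinery module is available, confirms these match the usual "500m" and "512Mi" spellings.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// Parse the human-readable forms and print the values the kubelet's spec
// dump shows in its internal representation.
func main() {
	cpu := resource.MustParse("500m")  // {{500 -3} {} 500m DecimalSI}
	mem := resource.MustParse("512Mi") // {{536870912 0} {} BinarySI}
	fmt.Println(cpu.MilliValue(), "millicores") // 500
	fmt.Println(mem.Value(), "bytes")           // 536870912
}
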
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:50:06 crc kubenswrapper[4858]: E0218 00:50:06.812338 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" Feb 18 00:50:06 crc kubenswrapper[4858]: E0218 00:50:06.812886 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IM
AGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheu
s/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DE
FAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IM
AGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qk98m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn_openstack-operators(229552d0-e72e-49af-a4c7-6052e2a7bf5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:50:06 crc kubenswrapper[4858]: E0218 00:50:06.814404 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" podUID="229552d0-e72e-49af-a4c7-6052e2a7bf5a" Feb 18 00:50:06 crc kubenswrapper[4858]: E0218 00:50:06.909376 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
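
The ErrImagePull entries above are gRPC errors surfaced from the CRI image service ("rpc error: code = Canceled desc = copying config: context canceled"), after which the kubelet reports ImagePullBackOff for the same container. The classifyPullError helper below is a hedged sketch of how such an error can be classified on the caller side with the grpc-go status package; it is illustrative plumbing, not the kubelet's actual code path.

package main

import (
	"context"
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// classifyPullError inspects an image-pull error the way a CRI caller could:
// gRPC status codes first, plain context errors as a fallback.
func classifyPullError(err error) string {
	if s, ok := status.FromError(err); ok {
		switch s.Code() {
		case codes.Canceled:
			return "pull canceled (context canceled while copying the image)"
		case codes.DeadlineExceeded:
			return "pull timed out"
		}
	}
	if errors.Is(err, context.Canceled) {
		return "context canceled outside gRPC"
	}
	return "other pull error"
}

func main() {
	// Reconstruct the error shape reported by the image service above.
	err := status.Error(codes.Canceled, "copying config: context canceled")
	fmt.Println(classifyPullError(err))
}
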
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" podUID="229552d0-e72e-49af-a4c7-6052e2a7bf5a" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.184361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.184597 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.192024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-webhook-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.192922 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/577edb6b-435b-4d2e-bb6c-3f9c7bac9256-metrics-certs\") pod \"openstack-operator-controller-manager-669759659c-2sgf5\" (UID: \"577edb6b-435b-4d2e-bb6c-3f9c7bac9256\") " pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:50:07 crc kubenswrapper[4858]: E0218 00:50:07.276276 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 18 00:50:07 crc kubenswrapper[4858]: E0218 00:50:07.277102 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xljw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dqvkf_openstack-operators(b83c91fe-13d0-4711-9f90-3da887fa657d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:50:07 crc kubenswrapper[4858]: E0218 00:50:07.278248 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" podUID="b83c91fe-13d0-4711-9f90-3da887fa657d" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.349638 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4tnz2" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.358322 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.827404 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f"] Feb 18 00:50:07 crc kubenswrapper[4858]: W0218 00:50:07.835636 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58a2adef_c01f_464e_aa1d_8c2d8a6e5c58.slice/crio-7f181fa8f54f3023ce4e534a7786b6a85477c9d46e258551356432d276b09807 WatchSource:0}: Error finding container 7f181fa8f54f3023ce4e534a7786b6a85477c9d46e258551356432d276b09807: Status 404 returned error can't find the container with id 7f181fa8f54f3023ce4e534a7786b6a85477c9d46e258551356432d276b09807 Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.917735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"4583245cbf90bbb57e10dd728d6513f324cbca372738a19b656a2071d981e8c4"} Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.920345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" event={"ID":"12badb74-0862-49e0-95a9-2e29d4b8dcf7","Type":"ContainerStarted","Data":"51c3d73029c55b3fd36695b06093be465665579bf2cd17a428d0fac96e207684"} Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.920782 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.922044 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" event={"ID":"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58","Type":"ContainerStarted","Data":"7f181fa8f54f3023ce4e534a7786b6a85477c9d46e258551356432d276b09807"} Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.924898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt29r" event={"ID":"5bac6607-84f1-4287-b432-fbcc2247c032","Type":"ContainerStarted","Data":"b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea"} Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.926679 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" event={"ID":"e60cf8fd-9033-4f85-a2a1-16441bd58a56","Type":"ContainerStarted","Data":"7b98c7b10e891462c0e69f16cd74ab605316a9cc7070036926b4eae52c823705"} Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.927019 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.928223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" event={"ID":"f3e44d9b-6d44-4aa9-9100-c2e139131ec9","Type":"ContainerStarted","Data":"ac5fa9ed13ff5b256fc24a15d92ffaa3a6f16db86237f62e71c3b305ec532392"} Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.928565 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" Feb 18 00:50:07 crc 
kubenswrapper[4858]: I0218 00:50:07.940262 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5"] Feb 18 00:50:07 crc kubenswrapper[4858]: W0218 00:50:07.940779 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod577edb6b_435b_4d2e_bb6c_3f9c7bac9256.slice/crio-7f96e3d66070c4fbcd245037daef3a7f8e8415ee58e139793fe4d716cbddddc8 WatchSource:0}: Error finding container 7f96e3d66070c4fbcd245037daef3a7f8e8415ee58e139793fe4d716cbddddc8: Status 404 returned error can't find the container with id 7f96e3d66070c4fbcd245037daef3a7f8e8415ee58e139793fe4d716cbddddc8 Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.959624 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" podStartSLOduration=3.055506656 podStartE2EDuration="33.959609524s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.393934305 +0000 UTC m=+929.699771037" lastFinishedPulling="2026-02-18 00:50:07.298037153 +0000 UTC m=+960.603873905" observedRunningTime="2026-02-18 00:50:07.957847612 +0000 UTC m=+961.263684344" watchObservedRunningTime="2026-02-18 00:50:07.959609524 +0000 UTC m=+961.265446256" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.981610 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vt29r" podStartSLOduration=9.356575475 podStartE2EDuration="25.981590551s" podCreationTimestamp="2026-02-18 00:49:42 +0000 UTC" firstStartedPulling="2026-02-18 00:49:50.687463388 +0000 UTC m=+943.993300120" lastFinishedPulling="2026-02-18 00:50:07.312478454 +0000 UTC m=+960.618315196" observedRunningTime="2026-02-18 00:50:07.980125855 +0000 UTC m=+961.285962587" watchObservedRunningTime="2026-02-18 00:50:07.981590551 +0000 UTC m=+961.287427453" Feb 18 00:50:07 crc kubenswrapper[4858]: I0218 00:50:07.998336 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" podStartSLOduration=3.151785015 podStartE2EDuration="33.998319699s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.40643855 +0000 UTC m=+929.712275282" lastFinishedPulling="2026-02-18 00:50:07.252973224 +0000 UTC m=+960.558809966" observedRunningTime="2026-02-18 00:50:07.995278625 +0000 UTC m=+961.301115367" watchObservedRunningTime="2026-02-18 00:50:07.998319699 +0000 UTC m=+961.304156431" Feb 18 00:50:08 crc kubenswrapper[4858]: I0218 00:50:08.020435 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" podStartSLOduration=3.057912176 podStartE2EDuration="34.020418228s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.353267053 +0000 UTC m=+929.659103785" lastFinishedPulling="2026-02-18 00:50:07.315773085 +0000 UTC m=+960.621609837" observedRunningTime="2026-02-18 00:50:08.011602393 +0000 UTC m=+961.317439125" watchObservedRunningTime="2026-02-18 00:50:08.020418228 +0000 UTC m=+961.326254960" Feb 18 00:50:08 crc kubenswrapper[4858]: I0218 00:50:08.946321 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" 
event={"ID":"577edb6b-435b-4d2e-bb6c-3f9c7bac9256","Type":"ContainerStarted","Data":"af0d49ea4f526fccd152fb740a390e37728d6b36a0eb58cbdc7d6255c84d4629"} Feb 18 00:50:08 crc kubenswrapper[4858]: I0218 00:50:08.953070 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:50:08 crc kubenswrapper[4858]: I0218 00:50:08.953093 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" event={"ID":"577edb6b-435b-4d2e-bb6c-3f9c7bac9256","Type":"ContainerStarted","Data":"7f96e3d66070c4fbcd245037daef3a7f8e8415ee58e139793fe4d716cbddddc8"} Feb 18 00:50:08 crc kubenswrapper[4858]: I0218 00:50:08.982294 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" podStartSLOduration=34.982273692 podStartE2EDuration="34.982273692s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:08.970741861 +0000 UTC m=+962.276578593" watchObservedRunningTime="2026-02-18 00:50:08.982273692 +0000 UTC m=+962.288110424" Feb 18 00:50:10 crc kubenswrapper[4858]: I0218 00:50:10.964364 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" event={"ID":"58a2adef-c01f-464e-aa1d-8c2d8a6e5c58","Type":"ContainerStarted","Data":"eebabe8409edf7aad8e4dd550d01cf8e781c15a63e444f7e0d62da639d119517"} Feb 18 00:50:11 crc kubenswrapper[4858]: I0218 00:50:11.003984 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" podStartSLOduration=34.942076941 podStartE2EDuration="37.00395682s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:50:07.837162689 +0000 UTC m=+961.142999421" lastFinishedPulling="2026-02-18 00:50:09.899042538 +0000 UTC m=+963.204879300" observedRunningTime="2026-02-18 00:50:10.993062405 +0000 UTC m=+964.298899177" watchObservedRunningTime="2026-02-18 00:50:11.00395682 +0000 UTC m=+964.309793592" Feb 18 00:50:11 crc kubenswrapper[4858]: I0218 00:50:11.974626 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:50:12 crc kubenswrapper[4858]: I0218 00:50:12.823477 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jshsv"] Feb 18 00:50:12 crc kubenswrapper[4858]: I0218 00:50:12.826050 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:12 crc kubenswrapper[4858]: I0218 00:50:12.847189 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jshsv"] Feb 18 00:50:12 crc kubenswrapper[4858]: I0218 00:50:12.912411 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:50:12 crc kubenswrapper[4858]: I0218 00:50:12.912789 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:50:12 crc kubenswrapper[4858]: I0218 00:50:12.968901 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chcdd\" (UniqueName: \"kubernetes.io/projected/bdf005d6-d7be-493d-a062-3227e3d3b096-kube-api-access-chcdd\") pod \"redhat-marketplace-jshsv\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:12 crc kubenswrapper[4858]: I0218 00:50:12.968953 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-utilities\") pod \"redhat-marketplace-jshsv\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:12 crc kubenswrapper[4858]: I0218 00:50:12.969049 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-catalog-content\") pod \"redhat-marketplace-jshsv\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:12 crc kubenswrapper[4858]: I0218 00:50:12.988985 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.041290 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.071121 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chcdd\" (UniqueName: \"kubernetes.io/projected/bdf005d6-d7be-493d-a062-3227e3d3b096-kube-api-access-chcdd\") pod \"redhat-marketplace-jshsv\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.071488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-utilities\") pod \"redhat-marketplace-jshsv\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.071790 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-catalog-content\") pod \"redhat-marketplace-jshsv\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.071995 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-utilities\") pod \"redhat-marketplace-jshsv\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.072294 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-catalog-content\") pod \"redhat-marketplace-jshsv\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.102429 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chcdd\" (UniqueName: \"kubernetes.io/projected/bdf005d6-d7be-493d-a062-3227e3d3b096-kube-api-access-chcdd\") pod \"redhat-marketplace-jshsv\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.160687 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.432229 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jshsv"] Feb 18 00:50:13 crc kubenswrapper[4858]: W0218 00:50:13.434820 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdf005d6_d7be_493d_a062_3227e3d3b096.slice/crio-90c370c5e499c0d525e5bf5e9206730586959d8ccead6a05dbf6ca6be95a1aab WatchSource:0}: Error finding container 90c370c5e499c0d525e5bf5e9206730586959d8ccead6a05dbf6ca6be95a1aab: Status 404 returned error can't find the container with id 90c370c5e499c0d525e5bf5e9206730586959d8ccead6a05dbf6ca6be95a1aab Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.988842 4858 generic.go:334] "Generic (PLEG): container finished" podID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerID="19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd" exitCode=0 Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.988929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jshsv" event={"ID":"bdf005d6-d7be-493d-a062-3227e3d3b096","Type":"ContainerDied","Data":"19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd"} Feb 18 00:50:13 crc kubenswrapper[4858]: I0218 00:50:13.989154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jshsv" event={"ID":"bdf005d6-d7be-493d-a062-3227e3d3b096","Type":"ContainerStarted","Data":"90c370c5e499c0d525e5bf5e9206730586959d8ccead6a05dbf6ca6be95a1aab"} Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.001382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jshsv" event={"ID":"bdf005d6-d7be-493d-a062-3227e3d3b096","Type":"ContainerStarted","Data":"7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6"} Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.074879 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-dm2f9" Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.168044 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-c6f9cb8b-f7txj" Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.385443 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vt29r"] Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.385790 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vt29r" podUID="5bac6607-84f1-4287-b432-fbcc2247c032" containerName="registry-server" containerID="cri-o://b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea" gracePeriod=2 Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.498578 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-ghrkx" Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.828400 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.918086 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-catalog-content\") pod \"5bac6607-84f1-4287-b432-fbcc2247c032\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.918281 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79qxp\" (UniqueName: \"kubernetes.io/projected/5bac6607-84f1-4287-b432-fbcc2247c032-kube-api-access-79qxp\") pod \"5bac6607-84f1-4287-b432-fbcc2247c032\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.918367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-utilities\") pod \"5bac6607-84f1-4287-b432-fbcc2247c032\" (UID: \"5bac6607-84f1-4287-b432-fbcc2247c032\") " Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.919788 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-utilities" (OuterVolumeSpecName: "utilities") pod "5bac6607-84f1-4287-b432-fbcc2247c032" (UID: "5bac6607-84f1-4287-b432-fbcc2247c032"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:15 crc kubenswrapper[4858]: I0218 00:50:15.927478 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bac6607-84f1-4287-b432-fbcc2247c032-kube-api-access-79qxp" (OuterVolumeSpecName: "kube-api-access-79qxp") pod "5bac6607-84f1-4287-b432-fbcc2247c032" (UID: "5bac6607-84f1-4287-b432-fbcc2247c032"). InnerVolumeSpecName "kube-api-access-79qxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.011671 4858 generic.go:334] "Generic (PLEG): container finished" podID="5bac6607-84f1-4287-b432-fbcc2247c032" containerID="b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea" exitCode=0 Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.011746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt29r" event={"ID":"5bac6607-84f1-4287-b432-fbcc2247c032","Type":"ContainerDied","Data":"b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea"} Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.011784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vt29r" event={"ID":"5bac6607-84f1-4287-b432-fbcc2247c032","Type":"ContainerDied","Data":"d04c6dc98e4c9d187a4eeb5b5ac8806295c84d37cb6231689dd1144e7adf4dd6"} Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.011831 4858 scope.go:117] "RemoveContainer" containerID="b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.011998 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vt29r" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.015992 4858 generic.go:334] "Generic (PLEG): container finished" podID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerID="7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6" exitCode=0 Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.016016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jshsv" event={"ID":"bdf005d6-d7be-493d-a062-3227e3d3b096","Type":"ContainerDied","Data":"7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6"} Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.019746 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bac6607-84f1-4287-b432-fbcc2247c032" (UID: "5bac6607-84f1-4287-b432-fbcc2247c032"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.019847 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79qxp\" (UniqueName: \"kubernetes.io/projected/5bac6607-84f1-4287-b432-fbcc2247c032-kube-api-access-79qxp\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.019914 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.054392 4858 scope.go:117] "RemoveContainer" containerID="df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.077407 4858 scope.go:117] "RemoveContainer" containerID="b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.116958 4858 scope.go:117] "RemoveContainer" containerID="b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea" Feb 18 00:50:16 crc kubenswrapper[4858]: E0218 00:50:16.117419 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea\": container with ID starting with b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea not found: ID does not exist" containerID="b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.117452 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea"} err="failed to get container status \"b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea\": rpc error: code = NotFound desc = could not find container \"b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea\": container with ID starting with b65d9b56a9e83fba3cf70540d5be4f2b3247723f66675922a17741c967e5c5ea not found: ID does not exist" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.117475 4858 scope.go:117] "RemoveContainer" containerID="df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4" Feb 18 00:50:16 crc kubenswrapper[4858]: E0218 00:50:16.118261 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4\": container with ID starting with df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4 not found: ID does not exist" containerID="df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.118282 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4"} err="failed to get container status \"df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4\": rpc error: code = NotFound desc = could not find container \"df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4\": container with ID starting with df60658fc871b5dba0fdf360abb9c05cbc7120177b3bbf4526abac82b7c90ca4 not found: ID does not exist" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.118300 4858 scope.go:117] "RemoveContainer" 
containerID="b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c" Feb 18 00:50:16 crc kubenswrapper[4858]: E0218 00:50:16.118573 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c\": container with ID starting with b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c not found: ID does not exist" containerID="b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.118592 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c"} err="failed to get container status \"b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c\": rpc error: code = NotFound desc = could not find container \"b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c\": container with ID starting with b8f58f1d4ff78a5dd5866ee57ac4f5f4db729f0fc259e04ab300ab8e798c785c not found: ID does not exist" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.123022 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bac6607-84f1-4287-b432-fbcc2247c032-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.340370 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vt29r"] Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.347023 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vt29r"] Feb 18 00:50:16 crc kubenswrapper[4858]: I0218 00:50:16.508557 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-ndk6f" Feb 18 00:50:17 crc kubenswrapper[4858]: I0218 00:50:17.028262 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jshsv" event={"ID":"bdf005d6-d7be-493d-a062-3227e3d3b096","Type":"ContainerStarted","Data":"d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96"} Feb 18 00:50:17 crc kubenswrapper[4858]: I0218 00:50:17.063158 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jshsv" podStartSLOduration=2.648002159 podStartE2EDuration="5.063123101s" podCreationTimestamp="2026-02-18 00:50:12 +0000 UTC" firstStartedPulling="2026-02-18 00:50:13.990387734 +0000 UTC m=+967.296224466" lastFinishedPulling="2026-02-18 00:50:16.405508676 +0000 UTC m=+969.711345408" observedRunningTime="2026-02-18 00:50:17.057611846 +0000 UTC m=+970.363448578" watchObservedRunningTime="2026-02-18 00:50:17.063123101 +0000 UTC m=+970.368959883" Feb 18 00:50:17 crc kubenswrapper[4858]: I0218 00:50:17.368270 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-669759659c-2sgf5" Feb 18 00:50:17 crc kubenswrapper[4858]: I0218 00:50:17.432846 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bac6607-84f1-4287-b432-fbcc2247c032" path="/var/lib/kubelet/pods/5bac6607-84f1-4287-b432-fbcc2247c032/volumes" Feb 18 00:50:18 crc kubenswrapper[4858]: E0218 00:50:18.421211 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" podUID="dda54f36-cfc8-468e-8101-f8041735931f" Feb 18 00:50:18 crc kubenswrapper[4858]: E0218 00:50:18.421337 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" podUID="11bc7389-c53b-4030-892b-43da85d70fe1" Feb 18 00:50:19 crc kubenswrapper[4858]: I0218 00:50:19.050702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" event={"ID":"229552d0-e72e-49af-a4c7-6052e2a7bf5a","Type":"ContainerStarted","Data":"7b69d98d8d324512dfa1f647135f6f6872f92e409a45de7cf9ccd40c4e2a6816"} Feb 18 00:50:19 crc kubenswrapper[4858]: I0218 00:50:19.051202 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:50:19 crc kubenswrapper[4858]: I0218 00:50:19.103327 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" podStartSLOduration=17.876629799 podStartE2EDuration="45.103294235s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:51.583296443 +0000 UTC m=+944.889133175" lastFinishedPulling="2026-02-18 00:50:18.809960879 +0000 UTC m=+972.115797611" observedRunningTime="2026-02-18 00:50:19.091423036 +0000 UTC m=+972.397259858" watchObservedRunningTime="2026-02-18 00:50:19.103294235 +0000 UTC m=+972.409131007" Feb 18 00:50:21 crc kubenswrapper[4858]: E0218 00:50:21.421855 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" podUID="b83c91fe-13d0-4711-9f90-3da887fa657d" Feb 18 00:50:23 crc kubenswrapper[4858]: I0218 00:50:23.161061 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:23 crc kubenswrapper[4858]: I0218 00:50:23.161568 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:23 crc kubenswrapper[4858]: I0218 00:50:23.238638 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:24 crc kubenswrapper[4858]: I0218 00:50:24.173786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:24 crc kubenswrapper[4858]: I0218 00:50:24.236475 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jshsv"] Feb 18 00:50:26 crc kubenswrapper[4858]: I0218 00:50:26.118042 4858 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-marketplace-jshsv" podUID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerName="registry-server" containerID="cri-o://d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96" gracePeriod=2 Feb 18 00:50:26 crc kubenswrapper[4858]: I0218 00:50:26.765868 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:26 crc kubenswrapper[4858]: I0218 00:50:26.913930 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-utilities\") pod \"bdf005d6-d7be-493d-a062-3227e3d3b096\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " Feb 18 00:50:26 crc kubenswrapper[4858]: I0218 00:50:26.914439 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-catalog-content\") pod \"bdf005d6-d7be-493d-a062-3227e3d3b096\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " Feb 18 00:50:26 crc kubenswrapper[4858]: I0218 00:50:26.914735 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chcdd\" (UniqueName: \"kubernetes.io/projected/bdf005d6-d7be-493d-a062-3227e3d3b096-kube-api-access-chcdd\") pod \"bdf005d6-d7be-493d-a062-3227e3d3b096\" (UID: \"bdf005d6-d7be-493d-a062-3227e3d3b096\") " Feb 18 00:50:26 crc kubenswrapper[4858]: I0218 00:50:26.914819 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-utilities" (OuterVolumeSpecName: "utilities") pod "bdf005d6-d7be-493d-a062-3227e3d3b096" (UID: "bdf005d6-d7be-493d-a062-3227e3d3b096"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:26 crc kubenswrapper[4858]: I0218 00:50:26.921649 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf005d6-d7be-493d-a062-3227e3d3b096-kube-api-access-chcdd" (OuterVolumeSpecName: "kube-api-access-chcdd") pod "bdf005d6-d7be-493d-a062-3227e3d3b096" (UID: "bdf005d6-d7be-493d-a062-3227e3d3b096"). InnerVolumeSpecName "kube-api-access-chcdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:26 crc kubenswrapper[4858]: I0218 00:50:26.958553 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdf005d6-d7be-493d-a062-3227e3d3b096" (UID: "bdf005d6-d7be-493d-a062-3227e3d3b096"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.016405 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.016445 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf005d6-d7be-493d-a062-3227e3d3b096-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.016459 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chcdd\" (UniqueName: \"kubernetes.io/projected/bdf005d6-d7be-493d-a062-3227e3d3b096-kube-api-access-chcdd\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.133109 4858 generic.go:334] "Generic (PLEG): container finished" podID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerID="d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96" exitCode=0 Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.133214 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jshsv" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.133218 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jshsv" event={"ID":"bdf005d6-d7be-493d-a062-3227e3d3b096","Type":"ContainerDied","Data":"d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96"} Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.135453 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jshsv" event={"ID":"bdf005d6-d7be-493d-a062-3227e3d3b096","Type":"ContainerDied","Data":"90c370c5e499c0d525e5bf5e9206730586959d8ccead6a05dbf6ca6be95a1aab"} Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.135533 4858 scope.go:117] "RemoveContainer" containerID="d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.162827 4858 scope.go:117] "RemoveContainer" containerID="7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.195751 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jshsv"] Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.204261 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jshsv"] Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.208291 4858 scope.go:117] "RemoveContainer" containerID="19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.241546 4858 scope.go:117] "RemoveContainer" containerID="d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96" Feb 18 00:50:27 crc kubenswrapper[4858]: E0218 00:50:27.242220 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96\": container with ID starting with d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96 not found: ID does not exist" containerID="d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.242272 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96"} err="failed to get container status \"d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96\": rpc error: code = NotFound desc = could not find container \"d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96\": container with ID starting with d33e3f20cf28755d54fb4cd528abdd629913c334d614693df0efcaf705be8e96 not found: ID does not exist" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.242305 4858 scope.go:117] "RemoveContainer" containerID="7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6" Feb 18 00:50:27 crc kubenswrapper[4858]: E0218 00:50:27.243103 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6\": container with ID starting with 7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6 not found: ID does not exist" containerID="7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.243167 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6"} err="failed to get container status \"7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6\": rpc error: code = NotFound desc = could not find container \"7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6\": container with ID starting with 7b8017acdcf2ffe7993f149f08a34d8f6aa97677bb04921a4516c32f587738f6 not found: ID does not exist" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.243216 4858 scope.go:117] "RemoveContainer" containerID="19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd" Feb 18 00:50:27 crc kubenswrapper[4858]: E0218 00:50:27.251512 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd\": container with ID starting with 19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd not found: ID does not exist" containerID="19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.251971 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd"} err="failed to get container status \"19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd\": rpc error: code = NotFound desc = could not find container \"19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd\": container with ID starting with 19a79b575b86482f3f86c924c175034fce11b5709f660577f83eec857c4fcacd not found: ID does not exist" Feb 18 00:50:27 crc kubenswrapper[4858]: I0218 00:50:27.432808 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf005d6-d7be-493d-a062-3227e3d3b096" path="/var/lib/kubelet/pods/bdf005d6-d7be-493d-a062-3227e3d3b096/volumes" Feb 18 00:50:30 crc kubenswrapper[4858]: I0218 00:50:30.172908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" 
event={"ID":"11bc7389-c53b-4030-892b-43da85d70fe1","Type":"ContainerStarted","Data":"c4837d6769ce7563626cc6c0c15e2f784b5a346461dd2122d9edc9c400b2ee7b"} Feb 18 00:50:30 crc kubenswrapper[4858]: I0218 00:50:30.174295 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" Feb 18 00:50:30 crc kubenswrapper[4858]: I0218 00:50:30.199095 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" podStartSLOduration=2.695581754 podStartE2EDuration="56.19906763s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.405308152 +0000 UTC m=+929.711144884" lastFinishedPulling="2026-02-18 00:50:29.908794038 +0000 UTC m=+983.214630760" observedRunningTime="2026-02-18 00:50:30.195321559 +0000 UTC m=+983.501158371" watchObservedRunningTime="2026-02-18 00:50:30.19906763 +0000 UTC m=+983.504904402" Feb 18 00:50:30 crc kubenswrapper[4858]: I0218 00:50:30.730829 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.501382 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nld4p"] Feb 18 00:50:31 crc kubenswrapper[4858]: E0218 00:50:31.501673 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerName="extract-content" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.501685 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerName="extract-content" Feb 18 00:50:31 crc kubenswrapper[4858]: E0218 00:50:31.501699 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bac6607-84f1-4287-b432-fbcc2247c032" containerName="extract-content" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.501705 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bac6607-84f1-4287-b432-fbcc2247c032" containerName="extract-content" Feb 18 00:50:31 crc kubenswrapper[4858]: E0218 00:50:31.501720 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bac6607-84f1-4287-b432-fbcc2247c032" containerName="extract-utilities" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.501726 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bac6607-84f1-4287-b432-fbcc2247c032" containerName="extract-utilities" Feb 18 00:50:31 crc kubenswrapper[4858]: E0218 00:50:31.501743 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bac6607-84f1-4287-b432-fbcc2247c032" containerName="registry-server" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.501749 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bac6607-84f1-4287-b432-fbcc2247c032" containerName="registry-server" Feb 18 00:50:31 crc kubenswrapper[4858]: E0218 00:50:31.501758 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerName="extract-utilities" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.501763 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerName="extract-utilities" Feb 18 00:50:31 crc kubenswrapper[4858]: E0218 00:50:31.501774 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerName="registry-server" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.501781 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerName="registry-server" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.501900 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bac6607-84f1-4287-b432-fbcc2247c032" containerName="registry-server" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.501908 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf005d6-d7be-493d-a062-3227e3d3b096" containerName="registry-server" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.510833 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.522269 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nld4p"] Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.595570 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-utilities\") pod \"community-operators-nld4p\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.595889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-catalog-content\") pod \"community-operators-nld4p\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.596022 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnhk\" (UniqueName: \"kubernetes.io/projected/c524384b-90b0-4eab-8f7a-68ec57d36628-kube-api-access-9gnhk\") pod \"community-operators-nld4p\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.697427 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-catalog-content\") pod \"community-operators-nld4p\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.697526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gnhk\" (UniqueName: \"kubernetes.io/projected/c524384b-90b0-4eab-8f7a-68ec57d36628-kube-api-access-9gnhk\") pod \"community-operators-nld4p\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.697557 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-utilities\") pod \"community-operators-nld4p\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.697939 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-utilities\") pod \"community-operators-nld4p\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.698146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-catalog-content\") pod \"community-operators-nld4p\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.716141 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gnhk\" (UniqueName: \"kubernetes.io/projected/c524384b-90b0-4eab-8f7a-68ec57d36628-kube-api-access-9gnhk\") pod \"community-operators-nld4p\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:31 crc kubenswrapper[4858]: I0218 00:50:31.833217 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:32 crc kubenswrapper[4858]: I0218 00:50:32.311234 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nld4p"] Feb 18 00:50:32 crc kubenswrapper[4858]: W0218 00:50:32.320136 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc524384b_90b0_4eab_8f7a_68ec57d36628.slice/crio-c6698dd7014613b55143a8e0361f74da2340beb97220104204648797d75371e9 WatchSource:0}: Error finding container c6698dd7014613b55143a8e0361f74da2340beb97220104204648797d75371e9: Status 404 returned error can't find the container with id c6698dd7014613b55143a8e0361f74da2340beb97220104204648797d75371e9 Feb 18 00:50:33 crc kubenswrapper[4858]: I0218 00:50:33.205409 4858 generic.go:334] "Generic (PLEG): container finished" podID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerID="e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196" exitCode=0 Feb 18 00:50:33 crc kubenswrapper[4858]: I0218 00:50:33.205525 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nld4p" event={"ID":"c524384b-90b0-4eab-8f7a-68ec57d36628","Type":"ContainerDied","Data":"e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196"} Feb 18 00:50:33 crc kubenswrapper[4858]: I0218 00:50:33.205907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nld4p" event={"ID":"c524384b-90b0-4eab-8f7a-68ec57d36628","Type":"ContainerStarted","Data":"c6698dd7014613b55143a8e0361f74da2340beb97220104204648797d75371e9"} Feb 18 00:50:34 crc kubenswrapper[4858]: I0218 00:50:34.215916 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" event={"ID":"dda54f36-cfc8-468e-8101-f8041735931f","Type":"ContainerStarted","Data":"ecd503cf301023cc46b1c44aaa52c18f896259f90c94a686fffce79f0a230c0f"} Feb 18 00:50:34 crc kubenswrapper[4858]: I0218 00:50:34.217393 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" Feb 18 00:50:34 crc kubenswrapper[4858]: I0218 00:50:34.218392 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nld4p" event={"ID":"c524384b-90b0-4eab-8f7a-68ec57d36628","Type":"ContainerStarted","Data":"700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a"} Feb 18 00:50:34 crc kubenswrapper[4858]: I0218 00:50:34.234952 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" podStartSLOduration=2.800241908 podStartE2EDuration="1m0.234932504s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.403466127 +0000 UTC m=+929.709302859" lastFinishedPulling="2026-02-18 00:50:33.838156693 +0000 UTC m=+987.143993455" observedRunningTime="2026-02-18 00:50:34.230572367 +0000 UTC m=+987.536409129" watchObservedRunningTime="2026-02-18 00:50:34.234932504 +0000 UTC m=+987.540769236" Feb 18 00:50:35 crc kubenswrapper[4858]: I0218 00:50:35.036775 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-8v5bz" Feb 18 00:50:35 crc kubenswrapper[4858]: I0218 00:50:35.232144 4858 generic.go:334] "Generic (PLEG): container finished" podID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerID="700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a" exitCode=0 Feb 18 00:50:35 crc kubenswrapper[4858]: I0218 00:50:35.232208 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nld4p" event={"ID":"c524384b-90b0-4eab-8f7a-68ec57d36628","Type":"ContainerDied","Data":"700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a"} Feb 18 00:50:36 crc kubenswrapper[4858]: I0218 00:50:36.256020 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nld4p" event={"ID":"c524384b-90b0-4eab-8f7a-68ec57d36628","Type":"ContainerStarted","Data":"aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc"} Feb 18 00:50:36 crc kubenswrapper[4858]: I0218 00:50:36.286012 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nld4p" podStartSLOduration=2.842212943 podStartE2EDuration="5.285986492s" podCreationTimestamp="2026-02-18 00:50:31 +0000 UTC" firstStartedPulling="2026-02-18 00:50:33.208584124 +0000 UTC m=+986.514420896" lastFinishedPulling="2026-02-18 00:50:35.652357703 +0000 UTC m=+988.958194445" observedRunningTime="2026-02-18 00:50:36.279809711 +0000 UTC m=+989.585646473" watchObservedRunningTime="2026-02-18 00:50:36.285986492 +0000 UTC m=+989.591823264" Feb 18 00:50:37 crc kubenswrapper[4858]: I0218 00:50:37.266034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" event={"ID":"b83c91fe-13d0-4711-9f90-3da887fa657d","Type":"ContainerStarted","Data":"fd871dd0452c212a87aff5f4eff0423a1c540fe594bfbf2ae705ffa197dad64c"} Feb 18 00:50:37 crc kubenswrapper[4858]: I0218 00:50:37.287434 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dqvkf" podStartSLOduration=2.878172719 podStartE2EDuration="1m3.287413454s" podCreationTimestamp="2026-02-18 00:49:34 +0000 UTC" firstStartedPulling="2026-02-18 00:49:36.409718329 +0000 UTC m=+929.715555061" lastFinishedPulling="2026-02-18 00:50:36.818959064 +0000 UTC m=+990.124795796" observedRunningTime="2026-02-18 00:50:37.28478601 +0000 UTC 
m=+990.590622732" watchObservedRunningTime="2026-02-18 00:50:37.287413454 +0000 UTC m=+990.593250196" Feb 18 00:50:41 crc kubenswrapper[4858]: I0218 00:50:41.833636 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:41 crc kubenswrapper[4858]: I0218 00:50:41.834299 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:41 crc kubenswrapper[4858]: I0218 00:50:41.913677 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:42 crc kubenswrapper[4858]: I0218 00:50:42.406097 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:42 crc kubenswrapper[4858]: I0218 00:50:42.478247 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nld4p"] Feb 18 00:50:44 crc kubenswrapper[4858]: I0218 00:50:44.347457 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nld4p" podUID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerName="registry-server" containerID="cri-o://aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc" gracePeriod=2 Feb 18 00:50:44 crc kubenswrapper[4858]: I0218 00:50:44.854076 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:44 crc kubenswrapper[4858]: I0218 00:50:44.911792 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gnhk\" (UniqueName: \"kubernetes.io/projected/c524384b-90b0-4eab-8f7a-68ec57d36628-kube-api-access-9gnhk\") pod \"c524384b-90b0-4eab-8f7a-68ec57d36628\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " Feb 18 00:50:44 crc kubenswrapper[4858]: I0218 00:50:44.911921 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-utilities\") pod \"c524384b-90b0-4eab-8f7a-68ec57d36628\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " Feb 18 00:50:44 crc kubenswrapper[4858]: I0218 00:50:44.911950 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-catalog-content\") pod \"c524384b-90b0-4eab-8f7a-68ec57d36628\" (UID: \"c524384b-90b0-4eab-8f7a-68ec57d36628\") " Feb 18 00:50:44 crc kubenswrapper[4858]: I0218 00:50:44.913063 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-utilities" (OuterVolumeSpecName: "utilities") pod "c524384b-90b0-4eab-8f7a-68ec57d36628" (UID: "c524384b-90b0-4eab-8f7a-68ec57d36628"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:44 crc kubenswrapper[4858]: I0218 00:50:44.918997 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c524384b-90b0-4eab-8f7a-68ec57d36628-kube-api-access-9gnhk" (OuterVolumeSpecName: "kube-api-access-9gnhk") pod "c524384b-90b0-4eab-8f7a-68ec57d36628" (UID: "c524384b-90b0-4eab-8f7a-68ec57d36628"). InnerVolumeSpecName "kube-api-access-9gnhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.013837 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gnhk\" (UniqueName: \"kubernetes.io/projected/c524384b-90b0-4eab-8f7a-68ec57d36628-kube-api-access-9gnhk\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.014126 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.025533 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qqgpg" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.087566 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c524384b-90b0-4eab-8f7a-68ec57d36628" (UID: "c524384b-90b0-4eab-8f7a-68ec57d36628"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.115598 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524384b-90b0-4eab-8f7a-68ec57d36628-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.360211 4858 generic.go:334] "Generic (PLEG): container finished" podID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerID="aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc" exitCode=0 Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.360290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nld4p" event={"ID":"c524384b-90b0-4eab-8f7a-68ec57d36628","Type":"ContainerDied","Data":"aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc"} Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.360339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nld4p" event={"ID":"c524384b-90b0-4eab-8f7a-68ec57d36628","Type":"ContainerDied","Data":"c6698dd7014613b55143a8e0361f74da2340beb97220104204648797d75371e9"} Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.360374 4858 scope.go:117] "RemoveContainer" containerID="aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.360658 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nld4p" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.409288 4858 scope.go:117] "RemoveContainer" containerID="700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.437036 4858 scope.go:117] "RemoveContainer" containerID="e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.442462 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nld4p"] Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.442531 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nld4p"] Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.480906 4858 scope.go:117] "RemoveContainer" containerID="aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc" Feb 18 00:50:45 crc kubenswrapper[4858]: E0218 00:50:45.481396 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc\": container with ID starting with aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc not found: ID does not exist" containerID="aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.481442 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc"} err="failed to get container status \"aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc\": rpc error: code = NotFound desc = could not find container \"aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc\": container with ID starting with aa833292a46220e3536266bb511efd8182d87320c1281597ddfd4bf9b5b6e5dc not found: ID does not exist" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.481473 4858 scope.go:117] "RemoveContainer" containerID="700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a" Feb 18 00:50:45 crc kubenswrapper[4858]: E0218 00:50:45.482021 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a\": container with ID starting with 700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a not found: ID does not exist" containerID="700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.482053 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a"} err="failed to get container status \"700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a\": rpc error: code = NotFound desc = could not find container \"700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a\": container with ID starting with 700b4c65ddf8a1e006491de797357086c8d3ceacf595c7ee92dd59bafb07e34a not found: ID does not exist" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.482070 4858 scope.go:117] "RemoveContainer" containerID="e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196" Feb 18 00:50:45 crc kubenswrapper[4858]: E0218 00:50:45.482513 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196\": container with ID starting with e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196 not found: ID does not exist" containerID="e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196" Feb 18 00:50:45 crc kubenswrapper[4858]: I0218 00:50:45.482543 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196"} err="failed to get container status \"e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196\": rpc error: code = NotFound desc = could not find container \"e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196\": container with ID starting with e6c4f27d259a1df23afa442ed023911b9c0da6d3ff24b20ec43110251b942196 not found: ID does not exist" Feb 18 00:50:47 crc kubenswrapper[4858]: I0218 00:50:47.438886 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c524384b-90b0-4eab-8f7a-68ec57d36628" path="/var/lib/kubelet/pods/c524384b-90b0-4eab-8f7a-68ec57d36628/volumes" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.114763 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g62s8"] Feb 18 00:51:06 crc kubenswrapper[4858]: E0218 00:51:06.115596 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerName="extract-utilities" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.115610 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerName="extract-utilities" Feb 18 00:51:06 crc kubenswrapper[4858]: E0218 00:51:06.115630 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerName="extract-content" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.115636 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerName="extract-content" Feb 18 00:51:06 crc kubenswrapper[4858]: E0218 00:51:06.115646 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerName="registry-server" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.115652 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerName="registry-server" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.115801 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c524384b-90b0-4eab-8f7a-68ec57d36628" containerName="registry-server" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.116530 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.119546 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.119712 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.119836 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-l2n4d" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.119980 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.128043 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g62s8"] Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.172628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9befe7db-2687-4b07-ab13-9763231c95c3-config\") pod \"dnsmasq-dns-675f4bcbfc-g62s8\" (UID: \"9befe7db-2687-4b07-ab13-9763231c95c3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.172702 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bgp7\" (UniqueName: \"kubernetes.io/projected/9befe7db-2687-4b07-ab13-9763231c95c3-kube-api-access-2bgp7\") pod \"dnsmasq-dns-675f4bcbfc-g62s8\" (UID: \"9befe7db-2687-4b07-ab13-9763231c95c3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.174865 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96fdg"] Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.176326 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.179882 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.205987 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96fdg"] Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.273929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-592zz\" (UniqueName: \"kubernetes.io/projected/1dc7085a-564b-462e-8853-0c15a3d00f66-kube-api-access-592zz\") pod \"dnsmasq-dns-78dd6ddcc-96fdg\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.273998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-config\") pod \"dnsmasq-dns-78dd6ddcc-96fdg\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.274086 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9befe7db-2687-4b07-ab13-9763231c95c3-config\") pod \"dnsmasq-dns-675f4bcbfc-g62s8\" (UID: \"9befe7db-2687-4b07-ab13-9763231c95c3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.274131 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bgp7\" (UniqueName: \"kubernetes.io/projected/9befe7db-2687-4b07-ab13-9763231c95c3-kube-api-access-2bgp7\") pod \"dnsmasq-dns-675f4bcbfc-g62s8\" (UID: \"9befe7db-2687-4b07-ab13-9763231c95c3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.274165 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-96fdg\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.275678 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9befe7db-2687-4b07-ab13-9763231c95c3-config\") pod \"dnsmasq-dns-675f4bcbfc-g62s8\" (UID: \"9befe7db-2687-4b07-ab13-9763231c95c3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.296470 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bgp7\" (UniqueName: \"kubernetes.io/projected/9befe7db-2687-4b07-ab13-9763231c95c3-kube-api-access-2bgp7\") pod \"dnsmasq-dns-675f4bcbfc-g62s8\" (UID: \"9befe7db-2687-4b07-ab13-9763231c95c3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.374643 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-96fdg\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 
00:51:06.374708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-592zz\" (UniqueName: \"kubernetes.io/projected/1dc7085a-564b-462e-8853-0c15a3d00f66-kube-api-access-592zz\") pod \"dnsmasq-dns-78dd6ddcc-96fdg\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.374742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-config\") pod \"dnsmasq-dns-78dd6ddcc-96fdg\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.375778 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-config\") pod \"dnsmasq-dns-78dd6ddcc-96fdg\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.375852 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-96fdg\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.392512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-592zz\" (UniqueName: \"kubernetes.io/projected/1dc7085a-564b-462e-8853-0c15a3d00f66-kube-api-access-592zz\") pod \"dnsmasq-dns-78dd6ddcc-96fdg\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.441463 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.501159 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.882591 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g62s8"] Feb 18 00:51:06 crc kubenswrapper[4858]: I0218 00:51:06.977846 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96fdg"] Feb 18 00:51:06 crc kubenswrapper[4858]: W0218 00:51:06.980981 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dc7085a_564b_462e_8853_0c15a3d00f66.slice/crio-c6f67b3ca519fd8e758037cb383dd0bf69667222e37200c0e50159b6f4652fef WatchSource:0}: Error finding container c6f67b3ca519fd8e758037cb383dd0bf69667222e37200c0e50159b6f4652fef: Status 404 returned error can't find the container with id c6f67b3ca519fd8e758037cb383dd0bf69667222e37200c0e50159b6f4652fef Feb 18 00:51:07 crc kubenswrapper[4858]: I0218 00:51:07.598232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" event={"ID":"9befe7db-2687-4b07-ab13-9763231c95c3","Type":"ContainerStarted","Data":"82a380621fad6a9cd8e9f105b8b6cc996e49fedb14dd9da5efb9b45c76b675ba"} Feb 18 00:51:07 crc kubenswrapper[4858]: I0218 00:51:07.600100 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" event={"ID":"1dc7085a-564b-462e-8853-0c15a3d00f66","Type":"ContainerStarted","Data":"c6f67b3ca519fd8e758037cb383dd0bf69667222e37200c0e50159b6f4652fef"} Feb 18 00:51:08 crc kubenswrapper[4858]: I0218 00:51:08.857449 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g62s8"] Feb 18 00:51:08 crc kubenswrapper[4858]: I0218 00:51:08.886778 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nsq62"] Feb 18 00:51:08 crc kubenswrapper[4858]: I0218 00:51:08.887861 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:08 crc kubenswrapper[4858]: I0218 00:51:08.898977 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nsq62"] Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.013376 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nsq62\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.013421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqmm\" (UniqueName: \"kubernetes.io/projected/88738498-130e-438a-a822-f9946add222c-kube-api-access-bvqmm\") pod \"dnsmasq-dns-666b6646f7-nsq62\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.013515 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-config\") pod \"dnsmasq-dns-666b6646f7-nsq62\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.117571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-config\") pod \"dnsmasq-dns-666b6646f7-nsq62\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.117646 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nsq62\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.117673 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqmm\" (UniqueName: \"kubernetes.io/projected/88738498-130e-438a-a822-f9946add222c-kube-api-access-bvqmm\") pod \"dnsmasq-dns-666b6646f7-nsq62\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.118751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-config\") pod \"dnsmasq-dns-666b6646f7-nsq62\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.119200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nsq62\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.136062 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96fdg"] Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.137129 
4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqmm\" (UniqueName: \"kubernetes.io/projected/88738498-130e-438a-a822-f9946add222c-kube-api-access-bvqmm\") pod \"dnsmasq-dns-666b6646f7-nsq62\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.172563 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7gzrt"] Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.173803 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.176920 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7gzrt"] Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.214515 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.320701 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-config\") pod \"dnsmasq-dns-57d769cc4f-7gzrt\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.321000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7gzrt\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.321082 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kbz\" (UniqueName: \"kubernetes.io/projected/1783fb29-f6d7-47ae-8320-863d18857042-kube-api-access-q7kbz\") pod \"dnsmasq-dns-57d769cc4f-7gzrt\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.426579 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7gzrt\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.426702 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7kbz\" (UniqueName: \"kubernetes.io/projected/1783fb29-f6d7-47ae-8320-863d18857042-kube-api-access-q7kbz\") pod \"dnsmasq-dns-57d769cc4f-7gzrt\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.426731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-config\") pod \"dnsmasq-dns-57d769cc4f-7gzrt\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.427314 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7gzrt\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.427471 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-config\") pod \"dnsmasq-dns-57d769cc4f-7gzrt\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.445290 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7kbz\" (UniqueName: \"kubernetes.io/projected/1783fb29-f6d7-47ae-8320-863d18857042-kube-api-access-q7kbz\") pod \"dnsmasq-dns-57d769cc4f-7gzrt\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.499271 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.673416 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nsq62"] Feb 18 00:51:09 crc kubenswrapper[4858]: I0218 00:51:09.940056 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7gzrt"] Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.015224 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.016674 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.020025 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.020035 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.020309 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.020312 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.020568 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.021643 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lhgmt" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.024638 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.037299 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.151710 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-config-data\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: 
I0218 00:51:10.151797 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a53fffdd-3f92-4632-8391-cc89792884a8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.151822 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.151838 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.151856 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.151882 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.151920 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk8gh\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-kube-api-access-tk8gh\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.151939 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.151956 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a53fffdd-3f92-4632-8391-cc89792884a8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.151977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.152008 
4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253222 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253603 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-config-data\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253656 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a53fffdd-3f92-4632-8391-cc89792884a8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253675 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253689 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253729 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253767 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk8gh\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-kube-api-access-tk8gh\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253786 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253803 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a53fffdd-3f92-4632-8391-cc89792884a8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.253822 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.254059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.254306 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.255156 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.255417 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-config-data\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.255820 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.257875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a53fffdd-3f92-4632-8391-cc89792884a8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.259792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.259842 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.259886 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7b9bb42ea0921459bd8f9dee1d37c625c88818f9ff056e9cdb682621212c886/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.273515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.273577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk8gh\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-kube-api-access-tk8gh\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.293917 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a53fffdd-3f92-4632-8391-cc89792884a8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.296528 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.297838 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") pod \"rabbitmq-server-0\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.301289 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.306590 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.306751 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.306877 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5pwvb" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.307085 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.307234 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.307887 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.308004 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.313675 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.337557 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.455783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.455832 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.455961 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.455999 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25xpr\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-kube-api-access-25xpr\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.456080 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.456113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/69e5abda-5efa-402f-b66c-320cf6ed1d99-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.456134 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.456210 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.456236 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.456277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.456305 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/69e5abda-5efa-402f-b66c-320cf6ed1d99-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.558917 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/69e5abda-5efa-402f-b66c-320cf6ed1d99-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.559327 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.561759 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.562015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.562077 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25xpr\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-kube-api-access-25xpr\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.562157 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.562195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/69e5abda-5efa-402f-b66c-320cf6ed1d99-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.562215 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.562273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.562316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.562448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.563930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.564392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.564978 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.566788 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.568029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.569182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.569452 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.573467 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/69e5abda-5efa-402f-b66c-320cf6ed1d99-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.578995 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.579043 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8b6638bc3b4ec62d9a769affd0180f73c2510662f769e962e86871af5bab5490/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.583895 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25xpr\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-kube-api-access-25xpr\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.593539 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/69e5abda-5efa-402f-b66c-320cf6ed1d99-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.618368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") pod \"rabbitmq-cell1-server-0\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.630696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" event={"ID":"88738498-130e-438a-a822-f9946add222c","Type":"ContainerStarted","Data":"555f15e387b22570a3f1894e47cf3eab5f091dd1b409de299e5117d98a44975d"} Feb 18 00:51:10 crc kubenswrapper[4858]: I0218 00:51:10.634912 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.629113 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.630738 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.635258 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.635937 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.636035 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-78lxd" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.636206 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.642916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.646830 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.782221 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a845f908-18e9-47e2-bc4f-01308c8a69b3-config-data-default\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.782387 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a845f908-18e9-47e2-bc4f-01308c8a69b3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.782441 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77dabb10-dc08-4246-803b-b9369ddeba81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77dabb10-dc08-4246-803b-b9369ddeba81\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.782629 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a845f908-18e9-47e2-bc4f-01308c8a69b3-kolla-config\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.782766 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a845f908-18e9-47e2-bc4f-01308c8a69b3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.782949 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a845f908-18e9-47e2-bc4f-01308c8a69b3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.783079 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a845f908-18e9-47e2-bc4f-01308c8a69b3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.783139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl5vr\" (UniqueName: \"kubernetes.io/projected/a845f908-18e9-47e2-bc4f-01308c8a69b3-kube-api-access-gl5vr\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.885258 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a845f908-18e9-47e2-bc4f-01308c8a69b3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.885356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl5vr\" (UniqueName: \"kubernetes.io/projected/a845f908-18e9-47e2-bc4f-01308c8a69b3-kube-api-access-gl5vr\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.885442 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a845f908-18e9-47e2-bc4f-01308c8a69b3-config-data-default\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.885523 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a845f908-18e9-47e2-bc4f-01308c8a69b3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.885577 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77dabb10-dc08-4246-803b-b9369ddeba81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77dabb10-dc08-4246-803b-b9369ddeba81\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.885673 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a845f908-18e9-47e2-bc4f-01308c8a69b3-kolla-config\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.885721 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a845f908-18e9-47e2-bc4f-01308c8a69b3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.885777 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a845f908-18e9-47e2-bc4f-01308c8a69b3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.886591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a845f908-18e9-47e2-bc4f-01308c8a69b3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.888472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a845f908-18e9-47e2-bc4f-01308c8a69b3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.889000 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a845f908-18e9-47e2-bc4f-01308c8a69b3-kolla-config\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.889678 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.889746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a845f908-18e9-47e2-bc4f-01308c8a69b3-config-data-default\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.889745 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77dabb10-dc08-4246-803b-b9369ddeba81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77dabb10-dc08-4246-803b-b9369ddeba81\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6e60650f655fae74db9380b5cf65b0638c0d06f93f3f9ce758ebd94f0238c98f/globalmount\"" pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.893986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a845f908-18e9-47e2-bc4f-01308c8a69b3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.904263 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a845f908-18e9-47e2-bc4f-01308c8a69b3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.905134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl5vr\" (UniqueName: \"kubernetes.io/projected/a845f908-18e9-47e2-bc4f-01308c8a69b3-kube-api-access-gl5vr\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 
00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.941027 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77dabb10-dc08-4246-803b-b9369ddeba81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77dabb10-dc08-4246-803b-b9369ddeba81\") pod \"openstack-galera-0\" (UID: \"a845f908-18e9-47e2-bc4f-01308c8a69b3\") " pod="openstack/openstack-galera-0" Feb 18 00:51:11 crc kubenswrapper[4858]: I0218 00:51:11.965361 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.122226 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.125639 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.137082 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rvfnp" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.137419 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.138775 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.138948 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.139289 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.309006 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.309074 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.309099 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.309288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg4f5\" (UniqueName: \"kubernetes.io/projected/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-kube-api-access-pg4f5\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.309341 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.309451 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e140d6f7-18f1-4abf-8adb-a842e9a12d7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e140d6f7-18f1-4abf-8adb-a842e9a12d7a\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.309565 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.309621 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: W0218 00:51:13.358831 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1783fb29_f6d7_47ae_8320_863d18857042.slice/crio-40bd4c6882e94770f52466231ab98d54d82c283490fb05bea87a5c56be7ba8bd WatchSource:0}: Error finding container 40bd4c6882e94770f52466231ab98d54d82c283490fb05bea87a5c56be7ba8bd: Status 404 returned error can't find the container with id 40bd4c6882e94770f52466231ab98d54d82c283490fb05bea87a5c56be7ba8bd Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.414530 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg4f5\" (UniqueName: \"kubernetes.io/projected/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-kube-api-access-pg4f5\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.414601 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.414664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e140d6f7-18f1-4abf-8adb-a842e9a12d7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e140d6f7-18f1-4abf-8adb-a842e9a12d7a\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.414709 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.414750 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.414787 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.414837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.414884 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.416233 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.416757 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.418391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.426760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.434694 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.434740 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e140d6f7-18f1-4abf-8adb-a842e9a12d7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e140d6f7-18f1-4abf-8adb-a842e9a12d7a\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/495c5180fd842a16901315769e9006ec7a2f6318eaf1c1d4e8ed0ec72e8fda9b/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.446285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.450237 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.463722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg4f5\" (UniqueName: \"kubernetes.io/projected/acb8b920-9bb7-42b7-8bf7-e8f6b5880654-kube-api-access-pg4f5\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.617203 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e140d6f7-18f1-4abf-8adb-a842e9a12d7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e140d6f7-18f1-4abf-8adb-a842e9a12d7a\") pod \"openstack-cell1-galera-0\" (UID: \"acb8b920-9bb7-42b7-8bf7-e8f6b5880654\") " pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.679287 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" event={"ID":"1783fb29-f6d7-47ae-8320-863d18857042","Type":"ContainerStarted","Data":"40bd4c6882e94770f52466231ab98d54d82c283490fb05bea87a5c56be7ba8bd"} Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.702478 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.703847 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.705235 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.705836 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vwwkj" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.705950 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.717712 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.766397 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.821279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31807c8a-5224-4df1-a761-10031d623fa5-config-data\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.821400 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31807c8a-5224-4df1-a761-10031d623fa5-kolla-config\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.821445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwgb6\" (UniqueName: \"kubernetes.io/projected/31807c8a-5224-4df1-a761-10031d623fa5-kube-api-access-wwgb6\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.821542 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31807c8a-5224-4df1-a761-10031d623fa5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.821639 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/31807c8a-5224-4df1-a761-10031d623fa5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.923811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31807c8a-5224-4df1-a761-10031d623fa5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.924085 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/31807c8a-5224-4df1-a761-10031d623fa5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.924251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31807c8a-5224-4df1-a761-10031d623fa5-config-data\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.924435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31807c8a-5224-4df1-a761-10031d623fa5-kolla-config\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.924481 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwgb6\" 
(UniqueName: \"kubernetes.io/projected/31807c8a-5224-4df1-a761-10031d623fa5-kube-api-access-wwgb6\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.925299 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31807c8a-5224-4df1-a761-10031d623fa5-kolla-config\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.925478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31807c8a-5224-4df1-a761-10031d623fa5-config-data\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.929862 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31807c8a-5224-4df1-a761-10031d623fa5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.929932 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/31807c8a-5224-4df1-a761-10031d623fa5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:13 crc kubenswrapper[4858]: I0218 00:51:13.951262 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwgb6\" (UniqueName: \"kubernetes.io/projected/31807c8a-5224-4df1-a761-10031d623fa5-kube-api-access-wwgb6\") pod \"memcached-0\" (UID: \"31807c8a-5224-4df1-a761-10031d623fa5\") " pod="openstack/memcached-0" Feb 18 00:51:14 crc kubenswrapper[4858]: I0218 00:51:14.021870 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 00:51:15 crc kubenswrapper[4858]: I0218 00:51:15.971802 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:51:15 crc kubenswrapper[4858]: I0218 00:51:15.973017 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:51:15 crc kubenswrapper[4858]: I0218 00:51:15.975942 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4nvsc" Feb 18 00:51:15 crc kubenswrapper[4858]: I0218 00:51:15.983413 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.067951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jsbl\" (UniqueName: \"kubernetes.io/projected/9788397b-0bb7-43f9-9ac8-69b765750ecb-kube-api-access-9jsbl\") pod \"kube-state-metrics-0\" (UID: \"9788397b-0bb7-43f9-9ac8-69b765750ecb\") " pod="openstack/kube-state-metrics-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.168894 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jsbl\" (UniqueName: \"kubernetes.io/projected/9788397b-0bb7-43f9-9ac8-69b765750ecb-kube-api-access-9jsbl\") pod \"kube-state-metrics-0\" (UID: \"9788397b-0bb7-43f9-9ac8-69b765750ecb\") " pod="openstack/kube-state-metrics-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.207523 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jsbl\" (UniqueName: \"kubernetes.io/projected/9788397b-0bb7-43f9-9ac8-69b765750ecb-kube-api-access-9jsbl\") pod \"kube-state-metrics-0\" (UID: \"9788397b-0bb7-43f9-9ac8-69b765750ecb\") " pod="openstack/kube-state-metrics-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.292030 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.606786 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.609034 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.610823 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.611016 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-g4ps9" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.611142 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.619282 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.619353 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.630463 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.778429 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcjkd\" (UniqueName: \"kubernetes.io/projected/22183a64-a68c-47af-8352-b04603981c9d-kube-api-access-rcjkd\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.778511 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/22183a64-a68c-47af-8352-b04603981c9d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.778551 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/22183a64-a68c-47af-8352-b04603981c9d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.778584 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/22183a64-a68c-47af-8352-b04603981c9d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.778671 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/22183a64-a68c-47af-8352-b04603981c9d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.778689 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/22183a64-a68c-47af-8352-b04603981c9d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.778715 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/22183a64-a68c-47af-8352-b04603981c9d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.880357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcjkd\" (UniqueName: \"kubernetes.io/projected/22183a64-a68c-47af-8352-b04603981c9d-kube-api-access-rcjkd\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.880409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/22183a64-a68c-47af-8352-b04603981c9d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.880433 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/22183a64-a68c-47af-8352-b04603981c9d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.880459 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/22183a64-a68c-47af-8352-b04603981c9d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.880563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/22183a64-a68c-47af-8352-b04603981c9d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.880582 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/22183a64-a68c-47af-8352-b04603981c9d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.880608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/22183a64-a68c-47af-8352-b04603981c9d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.881082 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/22183a64-a68c-47af-8352-b04603981c9d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.883872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/22183a64-a68c-47af-8352-b04603981c9d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.885189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/22183a64-a68c-47af-8352-b04603981c9d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.886785 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/22183a64-a68c-47af-8352-b04603981c9d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.887189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/22183a64-a68c-47af-8352-b04603981c9d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.887405 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/22183a64-a68c-47af-8352-b04603981c9d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.906223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcjkd\" (UniqueName: \"kubernetes.io/projected/22183a64-a68c-47af-8352-b04603981c9d-kube-api-access-rcjkd\") pod \"alertmanager-metric-storage-0\" (UID: \"22183a64-a68c-47af-8352-b04603981c9d\") " pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:16 crc kubenswrapper[4858]: I0218 00:51:16.927890 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.202258 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.205341 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.207374 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.207802 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.207907 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.208029 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.207984 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.208246 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.208480 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.208600 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-txzkj" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.223075 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.387771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-config\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.388310 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/675206cd-1619-4598-81a9-96b66d09ce88-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.388524 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.388787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzdxn\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-kube-api-access-kzdxn\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.389011 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.389205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.389468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.389663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.389868 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.390089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492113 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492322 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-config\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/675206cd-1619-4598-81a9-96b66d09ce88-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492587 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdxn\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-kube-api-access-kzdxn\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.492687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.494739 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.494925 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.495433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/675206cd-1619-4598-81a9-96b66d09ce88-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.496403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.499719 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.500994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.502350 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.502385 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/44e27a725395d9ee006c04409605ca05c99678e3b59bf9a205b87c710aedbc27/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.504630 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.509703 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-config\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.526632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzdxn\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-kube-api-access-kzdxn\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.564675 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") pod \"prometheus-metric-storage-0\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:17 crc kubenswrapper[4858]: I0218 00:51:17.831212 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:51:19 crc kubenswrapper[4858]: I0218 00:51:19.982901 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 00:51:19 crc kubenswrapper[4858]: I0218 00:51:19.984714 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:19 crc kubenswrapper[4858]: I0218 00:51:19.987532 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 00:51:19 crc kubenswrapper[4858]: I0218 00:51:19.988775 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 00:51:19 crc kubenswrapper[4858]: I0218 00:51:19.988942 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 00:51:19 crc kubenswrapper[4858]: I0218 00:51:19.989106 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 00:51:19 crc kubenswrapper[4858]: I0218 00:51:19.989244 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-629wk" Feb 18 00:51:19 crc kubenswrapper[4858]: I0218 00:51:19.999565 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.137526 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsg4z\" (UniqueName: \"kubernetes.io/projected/7eb932c6-138e-44fc-b382-6e702ea9d39b-kube-api-access-nsg4z\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.137596 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eb932c6-138e-44fc-b382-6e702ea9d39b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.137628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7eb932c6-138e-44fc-b382-6e702ea9d39b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.137650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eb932c6-138e-44fc-b382-6e702ea9d39b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.137697 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb932c6-138e-44fc-b382-6e702ea9d39b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.137770 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f4f1bdc7-e92f-43aa-b06e-a208b5bd0d77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4f1bdc7-e92f-43aa-b06e-a208b5bd0d77\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.137804 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eb932c6-138e-44fc-b382-6e702ea9d39b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.137844 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eb932c6-138e-44fc-b382-6e702ea9d39b-config\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.239664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb932c6-138e-44fc-b382-6e702ea9d39b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.239746 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f4f1bdc7-e92f-43aa-b06e-a208b5bd0d77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4f1bdc7-e92f-43aa-b06e-a208b5bd0d77\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.239776 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eb932c6-138e-44fc-b382-6e702ea9d39b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.239807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eb932c6-138e-44fc-b382-6e702ea9d39b-config\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.239858 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsg4z\" (UniqueName: \"kubernetes.io/projected/7eb932c6-138e-44fc-b382-6e702ea9d39b-kube-api-access-nsg4z\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.239884 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eb932c6-138e-44fc-b382-6e702ea9d39b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.239902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7eb932c6-138e-44fc-b382-6e702ea9d39b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.239916 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eb932c6-138e-44fc-b382-6e702ea9d39b-ovsdbserver-nb-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.241519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eb932c6-138e-44fc-b382-6e702ea9d39b-config\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.243110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7eb932c6-138e-44fc-b382-6e702ea9d39b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.244546 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eb932c6-138e-44fc-b382-6e702ea9d39b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.245105 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.245142 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f4f1bdc7-e92f-43aa-b06e-a208b5bd0d77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4f1bdc7-e92f-43aa-b06e-a208b5bd0d77\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f02257f2d99d28538993f102c0676b664d414cb1350df4e8037d818cc29417d9/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.246487 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eb932c6-138e-44fc-b382-6e702ea9d39b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.252153 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb932c6-138e-44fc-b382-6e702ea9d39b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.253937 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eb932c6-138e-44fc-b382-6e702ea9d39b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.267987 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsg4z\" (UniqueName: \"kubernetes.io/projected/7eb932c6-138e-44fc-b382-6e702ea9d39b-kube-api-access-nsg4z\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.279584 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-f4f1bdc7-e92f-43aa-b06e-a208b5bd0d77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4f1bdc7-e92f-43aa-b06e-a208b5bd0d77\") pod \"ovsdbserver-nb-0\" (UID: \"7eb932c6-138e-44fc-b382-6e702ea9d39b\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.307479 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.656204 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fvnsh"] Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.657335 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.660278 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.660778 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fxxp5" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.666677 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.672872 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qn9qf"] Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.674405 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.682434 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fvnsh"] Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.704831 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qn9qf"] Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.749147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-scripts\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.749239 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47p2x\" (UniqueName: \"kubernetes.io/projected/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-kube-api-access-47p2x\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.749267 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-ovn-controller-tls-certs\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.749302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-var-run-ovn\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc 
kubenswrapper[4858]: I0218 00:51:20.749330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-var-log-ovn\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.749382 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-var-run\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.749415 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-combined-ca-bundle\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850443 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-scripts\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850516 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-etc-ovs\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrdk7\" (UniqueName: \"kubernetes.io/projected/131eb8ce-e6be-487f-b698-370140a1a338-kube-api-access-rrdk7\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850574 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47p2x\" (UniqueName: \"kubernetes.io/projected/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-kube-api-access-47p2x\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-ovn-controller-tls-certs\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-var-run\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850643 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-var-run-ovn\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850668 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-var-log-ovn\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-var-log\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-var-run\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850755 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-var-lib\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/131eb8ce-e6be-487f-b698-370140a1a338-scripts\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.850794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-combined-ca-bundle\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.851920 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-var-run-ovn\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.852050 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-var-log-ovn\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.852140 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-var-run\") pod \"ovn-controller-fvnsh\" (UID: 
\"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.853576 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-scripts\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.855596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-combined-ca-bundle\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.856580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-ovn-controller-tls-certs\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.867109 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47p2x\" (UniqueName: \"kubernetes.io/projected/19953a4a-b2c2-42f5-a48b-a217cf7b7ab0-kube-api-access-47p2x\") pod \"ovn-controller-fvnsh\" (UID: \"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0\") " pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.952941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-var-log\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.953028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-var-lib\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.953043 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/131eb8ce-e6be-487f-b698-370140a1a338-scripts\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.953108 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-etc-ovs\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.953123 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrdk7\" (UniqueName: \"kubernetes.io/projected/131eb8ce-e6be-487f-b698-370140a1a338-kube-api-access-rrdk7\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.953146 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-var-run\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.953266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-var-run\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.953434 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-var-log\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.953587 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-var-lib\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.955253 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/131eb8ce-e6be-487f-b698-370140a1a338-scripts\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.955367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/131eb8ce-e6be-487f-b698-370140a1a338-etc-ovs\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.974575 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrdk7\" (UniqueName: \"kubernetes.io/projected/131eb8ce-e6be-487f-b698-370140a1a338-kube-api-access-rrdk7\") pod \"ovn-controller-ovs-qn9qf\" (UID: \"131eb8ce-e6be-487f-b698-370140a1a338\") " pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:20 crc kubenswrapper[4858]: I0218 00:51:20.979343 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fvnsh" Feb 18 00:51:21 crc kubenswrapper[4858]: I0218 00:51:20.999877 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.422540 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.425191 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.427858 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-dxgnx" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.428144 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.428225 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.428895 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.440300 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.616904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-31c923aa-67ab-4081-8bd8-0bc56de8c2be\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-31c923aa-67ab-4081-8bd8-0bc56de8c2be\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.616978 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/af8bc938-e065-4d61-9abe-62806f59470d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.617043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8bc938-e065-4d61-9abe-62806f59470d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.617092 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/af8bc938-e065-4d61-9abe-62806f59470d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.617138 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8bc938-e065-4d61-9abe-62806f59470d-config\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.617246 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af8bc938-e065-4d61-9abe-62806f59470d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.617283 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngnl6\" (UniqueName: \"kubernetes.io/projected/af8bc938-e065-4d61-9abe-62806f59470d-kube-api-access-ngnl6\") pod 
\"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.617335 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af8bc938-e065-4d61-9abe-62806f59470d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.719158 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af8bc938-e065-4d61-9abe-62806f59470d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.719203 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngnl6\" (UniqueName: \"kubernetes.io/projected/af8bc938-e065-4d61-9abe-62806f59470d-kube-api-access-ngnl6\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.719233 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af8bc938-e065-4d61-9abe-62806f59470d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.719270 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-31c923aa-67ab-4081-8bd8-0bc56de8c2be\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-31c923aa-67ab-4081-8bd8-0bc56de8c2be\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.719296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/af8bc938-e065-4d61-9abe-62806f59470d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.719324 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8bc938-e065-4d61-9abe-62806f59470d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.719344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/af8bc938-e065-4d61-9abe-62806f59470d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.719369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8bc938-e065-4d61-9abe-62806f59470d-config\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.720206 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/af8bc938-e065-4d61-9abe-62806f59470d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.720225 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8bc938-e065-4d61-9abe-62806f59470d-config\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.720410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af8bc938-e065-4d61-9abe-62806f59470d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.723291 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.723359 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-31c923aa-67ab-4081-8bd8-0bc56de8c2be\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-31c923aa-67ab-4081-8bd8-0bc56de8c2be\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c99e6464de12f834e292598aab1afa1cb1ea62c5855646ce65b79645406d0ebc/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.725883 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af8bc938-e065-4d61-9abe-62806f59470d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.733000 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8bc938-e065-4d61-9abe-62806f59470d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.737628 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/af8bc938-e065-4d61-9abe-62806f59470d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.741274 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngnl6\" (UniqueName: \"kubernetes.io/projected/af8bc938-e065-4d61-9abe-62806f59470d-kube-api-access-ngnl6\") pod \"ovsdbserver-sb-0\" (UID: \"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:24 crc kubenswrapper[4858]: I0218 00:51:24.758377 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-31c923aa-67ab-4081-8bd8-0bc56de8c2be\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-31c923aa-67ab-4081-8bd8-0bc56de8c2be\") pod \"ovsdbserver-sb-0\" (UID: 
\"af8bc938-e065-4d61-9abe-62806f59470d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:25 crc kubenswrapper[4858]: I0218 00:51:25.052655 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:25 crc kubenswrapper[4858]: E0218 00:51:25.870259 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 00:51:25 crc kubenswrapper[4858]: E0218 00:51:25.870458 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-592zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-96fdg_openstack(1dc7085a-564b-462e-8853-0c15a3d00f66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:51:25 crc kubenswrapper[4858]: E0218 00:51:25.871711 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" podUID="1dc7085a-564b-462e-8853-0c15a3d00f66" Feb 18 00:51:25 crc kubenswrapper[4858]: E0218 00:51:25.923661 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 00:51:25 crc kubenswrapper[4858]: E0218 00:51:25.924033 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2bgp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-g62s8_openstack(9befe7db-2687-4b07-ab13-9763231c95c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:51:25 crc kubenswrapper[4858]: E0218 00:51:25.927259 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" podUID="9befe7db-2687-4b07-ab13-9763231c95c3" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.436539 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5"] Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.438854 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.444664 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.444882 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.450858 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.451062 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.451628 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-8t87k" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.452463 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5"] Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.554413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0117af9e-cf65-489b-80f0-8f8c449baf92-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.554484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0117af9e-cf65-489b-80f0-8f8c449baf92-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.554527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvd6\" (UniqueName: \"kubernetes.io/projected/0117af9e-cf65-489b-80f0-8f8c449baf92-kube-api-access-9mvd6\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.554586 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0117af9e-cf65-489b-80f0-8f8c449baf92-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.554628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/0117af9e-cf65-489b-80f0-8f8c449baf92-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc 
kubenswrapper[4858]: I0218 00:51:26.634460 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c"] Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.638393 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.644432 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c"] Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.645438 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.645524 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.645635 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.655462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0117af9e-cf65-489b-80f0-8f8c449baf92-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.655584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/0117af9e-cf65-489b-80f0-8f8c449baf92-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.655639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0117af9e-cf65-489b-80f0-8f8c449baf92-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.655682 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0117af9e-cf65-489b-80f0-8f8c449baf92-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.655704 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvd6\" (UniqueName: \"kubernetes.io/projected/0117af9e-cf65-489b-80f0-8f8c449baf92-kube-api-access-9mvd6\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.656727 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0117af9e-cf65-489b-80f0-8f8c449baf92-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.656846 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0117af9e-cf65-489b-80f0-8f8c449baf92-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.667546 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0117af9e-cf65-489b-80f0-8f8c449baf92-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.673781 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/0117af9e-cf65-489b-80f0-8f8c449baf92-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.679103 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvd6\" (UniqueName: \"kubernetes.io/projected/0117af9e-cf65-489b-80f0-8f8c449baf92-kube-api-access-9mvd6\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6mvr5\" (UID: \"0117af9e-cf65-489b-80f0-8f8c449baf92\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.731038 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9"] Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.738381 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.740981 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.746023 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9"] Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.747052 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.757010 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.757090 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.757118 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.757135 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.757178 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2nl\" (UniqueName: \"kubernetes.io/projected/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-kube-api-access-nn2nl\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.757213 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.807757 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="88738498-130e-438a-a822-f9946add222c" containerID="0d10afc14d76c0937c04c7e03706d601e828f76a3dc3f6e2cf423b26916e9e60" exitCode=0 Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.808437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" event={"ID":"88738498-130e-438a-a822-f9946add222c","Type":"ContainerDied","Data":"0d10afc14d76c0937c04c7e03706d601e828f76a3dc3f6e2cf423b26916e9e60"} Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.812361 4858 generic.go:334] "Generic (PLEG): container finished" podID="1783fb29-f6d7-47ae-8320-863d18857042" containerID="72c2a20168959b261983fe0a73267472e734e1a5ad374ee38f69849a234d483e" exitCode=0 Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.813145 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" event={"ID":"1783fb29-f6d7-47ae-8320-863d18857042","Type":"ContainerDied","Data":"72c2a20168959b261983fe0a73267472e734e1a5ad374ee38f69849a234d483e"} Feb 18 00:51:26 crc kubenswrapper[4858]: W0218 00:51:26.827043 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda845f908_18e9_47e2_bc4f_01308c8a69b3.slice/crio-e2197ebf5628b1dd088450479998ab56f6212913e6626b4ead7ea0605dc761ee WatchSource:0}: Error finding container e2197ebf5628b1dd088450479998ab56f6212913e6626b4ead7ea0605dc761ee: Status 404 returned error can't find the container with id e2197ebf5628b1dd088450479998ab56f6212913e6626b4ead7ea0605dc761ee Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.835824 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.871864 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/a78eeeda-46f2-4d10-b160-97d477d1d80e-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.871928 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/a78eeeda-46f2-4d10-b160-97d477d1d80e-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.871968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.871998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78eeeda-46f2-4d10-b160-97d477d1d80e-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " 
pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.872030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.872053 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.872075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.872098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgr6j\" (UniqueName: \"kubernetes.io/projected/a78eeeda-46f2-4d10-b160-97d477d1d80e-kube-api-access-cgr6j\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.872127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn2nl\" (UniqueName: \"kubernetes.io/projected/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-kube-api-access-nn2nl\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.872152 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.872174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a78eeeda-46f2-4d10-b160-97d477d1d80e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.872944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" 
(UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.873405 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.885655 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.899906 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.902300 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.903152 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.914632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.924595 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn2nl\" (UniqueName: \"kubernetes.io/projected/8cb4efd7-58cc-48fa-8d37-cd5d97add16c-kube-api-access-nn2nl\") pod \"cloudkitty-lokistack-querier-58c84b5844-v9f9c\" (UID: \"8cb4efd7-58cc-48fa-8d37-cd5d97add16c\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.966292 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.972214 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf"] Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.973914 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a78eeeda-46f2-4d10-b160-97d477d1d80e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.973964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/a78eeeda-46f2-4d10-b160-97d477d1d80e-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.974041 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/a78eeeda-46f2-4d10-b160-97d477d1d80e-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.974141 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78eeeda-46f2-4d10-b160-97d477d1d80e-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.974190 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgr6j\" (UniqueName: \"kubernetes.io/projected/a78eeeda-46f2-4d10-b160-97d477d1d80e-kube-api-access-cgr6j\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.974393 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.975632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78eeeda-46f2-4d10-b160-97d477d1d80e-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.978660 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a78eeeda-46f2-4d10-b160-97d477d1d80e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.980611 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.980855 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.980916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.980879 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.981059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/a78eeeda-46f2-4d10-b160-97d477d1d80e-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.981179 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.981449 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 18 00:51:26 crc kubenswrapper[4858]: I0218 00:51:26.989824 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/a78eeeda-46f2-4d10-b160-97d477d1d80e-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.010165 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.014434 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.016409 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-fb4jw" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.021738 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgr6j\" (UniqueName: \"kubernetes.io/projected/a78eeeda-46f2-4d10-b160-97d477d1d80e-kube-api-access-cgr6j\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9\" (UID: \"a78eeeda-46f2-4d10-b160-97d477d1d80e\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.046971 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.053113 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.076311 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.076403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t59rx\" (UniqueName: \"kubernetes.io/projected/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-kube-api-access-t59rx\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.076470 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.076515 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.076544 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.076641 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" 
(UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.076695 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.076724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.076768 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.093548 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180618 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180728 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180761 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180901 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180936 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180966 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.180987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t59rx\" (UniqueName: \"kubernetes.io/projected/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-kube-api-access-t59rx\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.181007 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr4p7\" (UniqueName: \"kubernetes.io/projected/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-kube-api-access-nr4p7\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.181031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.181049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.181072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.181092 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.182067 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: E0218 00:51:27.182343 4858 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Feb 18 00:51:27 crc kubenswrapper[4858]: E0218 
00:51:27.182380 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-tls-secret podName:ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14 nodeName:}" failed. No retries permitted until 2026-02-18 00:51:27.682367152 +0000 UTC m=+1040.988203884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-tls-secret") pod "cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" (UID: "ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14") : secret "cloudkitty-lokistack-gateway-http" not found Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.182989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.183876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.184898 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.188024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.192400 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.194799 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.204366 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t59rx\" (UniqueName: \"kubernetes.io/projected/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-kube-api-access-t59rx\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.282698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.282786 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.282814 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.282860 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.282897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr4p7\" (UniqueName: \"kubernetes.io/projected/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-kube-api-access-nr4p7\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.282959 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.283006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.283042 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.283081 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.287688 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.288812 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.290127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.291274 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.292153 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.297982 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.298280 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.300303 
4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.301792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr4p7\" (UniqueName: \"kubernetes.io/projected/7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c-kube-api-access-nr4p7\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-755l8\" (UID: \"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.400124 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.559323 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.616997 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.617745 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.618064 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.620292 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.620614 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.633215 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.690040 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9befe7db-2687-4b07-ab13-9763231c95c3-config\") pod \"9befe7db-2687-4b07-ab13-9763231c95c3\" (UID: \"9befe7db-2687-4b07-ab13-9763231c95c3\") " Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.690217 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bgp7\" (UniqueName: \"kubernetes.io/projected/9befe7db-2687-4b07-ab13-9763231c95c3-kube-api-access-2bgp7\") pod \"9befe7db-2687-4b07-ab13-9763231c95c3\" (UID: \"9befe7db-2687-4b07-ab13-9763231c95c3\") " Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.690413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.692144 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9befe7db-2687-4b07-ab13-9763231c95c3-config" 
(OuterVolumeSpecName: "config") pod "9befe7db-2687-4b07-ab13-9763231c95c3" (UID: "9befe7db-2687-4b07-ab13-9763231c95c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.694375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vtwxf\" (UID: \"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.698087 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9befe7db-2687-4b07-ab13-9763231c95c3-kube-api-access-2bgp7" (OuterVolumeSpecName: "kube-api-access-2bgp7") pod "9befe7db-2687-4b07-ab13-9763231c95c3" (UID: "9befe7db-2687-4b07-ab13-9763231c95c3"). InnerVolumeSpecName "kube-api-access-2bgp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.700598 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.701886 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.706240 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.706432 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.713641 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.721722 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: W0218 00:51:27.742684 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9788397b_0bb7_43f9_9ac8_69b765750ecb.slice/crio-d4e22b2b7c18ba4c75ee11324ba5c7879101602bb721e9f5336bd3bd24e1663c WatchSource:0}: Error finding container d4e22b2b7c18ba4c75ee11324ba5c7879101602bb721e9f5336bd3bd24e1663c: Status 404 returned error can't find the container with id d4e22b2b7c18ba4c75ee11324ba5c7879101602bb721e9f5336bd3bd24e1663c Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.766946 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.782740 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.791429 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-dns-svc\") pod \"1dc7085a-564b-462e-8853-0c15a3d00f66\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.791464 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-592zz\" (UniqueName: 
\"kubernetes.io/projected/1dc7085a-564b-462e-8853-0c15a3d00f66-kube-api-access-592zz\") pod \"1dc7085a-564b-462e-8853-0c15a3d00f66\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.791585 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-config\") pod \"1dc7085a-564b-462e-8853-0c15a3d00f66\" (UID: \"1dc7085a-564b-462e-8853-0c15a3d00f66\") " Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.791895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.792089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.792113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.792139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.792198 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9952t\" (UniqueName: \"kubernetes.io/projected/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-kube-api-access-9952t\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.792217 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.792233 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 
00:51:27.792251 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.792299 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9befe7db-2687-4b07-ab13-9763231c95c3-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.792309 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bgp7\" (UniqueName: \"kubernetes.io/projected/9befe7db-2687-4b07-ab13-9763231c95c3-kube-api-access-2bgp7\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.795896 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-config" (OuterVolumeSpecName: "config") pod "1dc7085a-564b-462e-8853-0c15a3d00f66" (UID: "1dc7085a-564b-462e-8853-0c15a3d00f66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.798553 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dc7085a-564b-462e-8853-0c15a3d00f66" (UID: "1dc7085a-564b-462e-8853-0c15a3d00f66"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.807200 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.813402 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fvnsh"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.816977 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc7085a-564b-462e-8853-0c15a3d00f66-kube-api-access-592zz" (OuterVolumeSpecName: "kube-api-access-592zz") pod "1dc7085a-564b-462e-8853-0c15a3d00f66" (UID: "1dc7085a-564b-462e-8853-0c15a3d00f66"). InnerVolumeSpecName "kube-api-access-592zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.821380 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.822432 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.826204 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.826402 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.829402 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.836743 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.838380 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-g62s8" event={"ID":"9befe7db-2687-4b07-ab13-9763231c95c3","Type":"ContainerDied","Data":"82a380621fad6a9cd8e9f105b8b6cc996e49fedb14dd9da5efb9b45c76b675ba"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.838478 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.841384 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fvnsh" event={"ID":"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0","Type":"ContainerStarted","Data":"d6a7278fb56af173952ca598db79eaef41cd29a599cff0a7baf7297d9af58fbe"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.842550 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"31807c8a-5224-4df1-a761-10031d623fa5","Type":"ContainerStarted","Data":"736fa7455384476027f1b3ce6086938b9aa46034acf56998638a2d010f09cb83"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.843918 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"acb8b920-9bb7-42b7-8bf7-e8f6b5880654","Type":"ContainerStarted","Data":"29f7d539a83b63d838e07872db640c29e90bd47ea0028db85ded13302e8328d4"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.847243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" event={"ID":"1783fb29-f6d7-47ae-8320-863d18857042","Type":"ContainerStarted","Data":"132295f522f7ffb813bdb12d797807eece86176caf07a7471c216f5e52436e9c"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.849740 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.849797 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.850291 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a53fffdd-3f92-4632-8391-cc89792884a8","Type":"ContainerStarted","Data":"4cf895fe2a11b21581bdc4078cb41798a6cb27b9b697859b399ed1db1d84cd97"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.879728 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" event={"ID":"88738498-130e-438a-a822-f9946add222c","Type":"ContainerStarted","Data":"de58294032ead23981559bf6fc7570e4f2389647725697c8bd5eabbe6325ead1"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.880730 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.889809 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.890442 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-96fdg" event={"ID":"1dc7085a-564b-462e-8853-0c15a3d00f66","Type":"ContainerDied","Data":"c6f67b3ca519fd8e758037cb383dd0bf69667222e37200c0e50159b6f4652fef"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897077 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897213 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897282 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897306 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897338 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-loki-s3\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897416 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.897457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.899468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284a610d-47d0-4f89-925c-c28aabef77e0-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.899535 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.899574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz7sr\" (UniqueName: \"kubernetes.io/projected/284a610d-47d0-4f89-925c-c28aabef77e0-kube-api-access-hz7sr\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.899629 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9952t\" (UniqueName: \"kubernetes.io/projected/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-kube-api-access-9952t\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.899751 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.899764 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-592zz\" (UniqueName: 
\"kubernetes.io/projected/1dc7085a-564b-462e-8853-0c15a3d00f66-kube-api-access-592zz\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.899775 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc7085a-564b-462e-8853-0c15a3d00f66-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.899850 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.900173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9788397b-0bb7-43f9-9ac8-69b765750ecb","Type":"ContainerStarted","Data":"d4e22b2b7c18ba4c75ee11324ba5c7879101602bb721e9f5336bd3bd24e1663c"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.900994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.902382 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.902433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.903123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.905443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.911638 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerStarted","Data":"718a2b165d3925b61222f071f999911d0afe01da0733d8b305f2ebc444a64677"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.916535 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9952t\" (UniqueName: \"kubernetes.io/projected/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-kube-api-access-9952t\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.929283 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"69e5abda-5efa-402f-b66c-320cf6ed1d99","Type":"ContainerStarted","Data":"430d024cc146802f5cbaa680bde2e27004eb97d4308e9a735089b6e85ceaa406"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.932224 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" podStartSLOduration=6.214046561 podStartE2EDuration="18.932210196s" podCreationTimestamp="2026-02-18 00:51:09 +0000 UTC" firstStartedPulling="2026-02-18 00:51:13.361258689 +0000 UTC m=+1026.667095431" lastFinishedPulling="2026-02-18 00:51:26.079422334 +0000 UTC m=+1039.385259066" observedRunningTime="2026-02-18 00:51:27.868247715 +0000 UTC m=+1041.174084447" watchObservedRunningTime="2026-02-18 00:51:27.932210196 +0000 UTC m=+1041.238046918" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.949491 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c716bb3e-01b1-4bc7-a9a2-4604faf684f0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.950127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a845f908-18e9-47e2-bc4f-01308c8a69b3","Type":"ContainerStarted","Data":"e2197ebf5628b1dd088450479998ab56f6212913e6626b4ead7ea0605dc761ee"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.950268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.952356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c716bb3e-01b1-4bc7-a9a2-4604faf684f0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.955698 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"22183a64-a68c-47af-8352-b04603981c9d","Type":"ContainerStarted","Data":"243fd30260252e0fd3daabab026cbb1b625d9dff9e679c2f010fcc5764ebe08b"} Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.956208 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qn9qf"] Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.956421 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" podStartSLOduration=3.583335212 podStartE2EDuration="19.956407376s" podCreationTimestamp="2026-02-18 00:51:08 +0000 UTC" firstStartedPulling="2026-02-18 00:51:09.673281103 +0000 UTC m=+1022.979117825" lastFinishedPulling="2026-02-18 
00:51:26.046353257 +0000 UTC m=+1039.352189989" observedRunningTime="2026-02-18 00:51:27.898147786 +0000 UTC m=+1041.203984518" watchObservedRunningTime="2026-02-18 00:51:27.956407376 +0000 UTC m=+1041.262244108" Feb 18 00:51:27 crc kubenswrapper[4858]: I0218 00:51:27.971155 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001440 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001518 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001640 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284a610d-47d0-4f89-925c-c28aabef77e0-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001769 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001796 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz7sr\" (UniqueName: \"kubernetes.io/projected/284a610d-47d0-4f89-925c-c28aabef77e0-kube-api-access-hz7sr\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001818 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt5vq\" (UniqueName: \"kubernetes.io/projected/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-kube-api-access-wt5vq\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001834 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001851 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.001889 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.002323 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.002715 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.004222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284a610d-47d0-4f89-925c-c28aabef77e0-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.008421 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.009245 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.013285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/284a610d-47d0-4f89-925c-c28aabef77e0-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.024479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz7sr\" (UniqueName: \"kubernetes.io/projected/284a610d-47d0-4f89-925c-c28aabef77e0-kube-api-access-hz7sr\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.047705 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"284a610d-47d0-4f89-925c-c28aabef77e0\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.084602 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.096414 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.102906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.103023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-loki-s3\") 
pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.103060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.103113 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt5vq\" (UniqueName: \"kubernetes.io/projected/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-kube-api-access-wt5vq\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.103141 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.103165 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.103191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.103918 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.108350 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.112033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc 
kubenswrapper[4858]: I0218 00:51:28.113751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.114332 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g62s8"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.125808 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.125951 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g62s8"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.128273 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.128893 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt5vq\" (UniqueName: \"kubernetes.io/projected/abc34ee9-ce6b-404e-b4d0-bd6211a3bc72-kube-api-access-wt5vq\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.133687 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.139523 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 00:51:28 crc kubenswrapper[4858]: W0218 00:51:28.140182 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eb932c6_138e_44fc_b382_6e702ea9d39b.slice/crio-ea84f5ff3da828c6b731e2eef036270df2fd08e8caaa8acb6bb64476a7d6107c WatchSource:0}: Error finding container ea84f5ff3da828c6b731e2eef036270df2fd08e8caaa8acb6bb64476a7d6107c: Status 404 returned error can't find the container with id ea84f5ff3da828c6b731e2eef036270df2fd08e8caaa8acb6bb64476a7d6107c Feb 18 00:51:28 crc kubenswrapper[4858]: W0218 00:51:28.142453 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cb4efd7_58cc_48fa_8d37_cd5d97add16c.slice/crio-59aa1aa841d0457f34dc12787f651b275c25f9ca3ae7fea1b92a5ef20c7a4edf WatchSource:0}: Error finding container 59aa1aa841d0457f34dc12787f651b275c25f9ca3ae7fea1b92a5ef20c7a4edf: Status 404 returned error can't find the container with id 59aa1aa841d0457f34dc12787f651b275c25f9ca3ae7fea1b92a5ef20c7a4edf Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.145130 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: E0218 00:51:28.148269 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn2nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-querier-58c84b5844-v9f9c_openstack(8cb4efd7-58cc-48fa-8d37-cd5d97add16c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:51:28 crc kubenswrapper[4858]: E0218 00:51:28.149583 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" podUID="8cb4efd7-58cc-48fa-8d37-cd5d97add16c" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.157407 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96fdg"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.167266 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96fdg"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.202783 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.255578 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.330008 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.358881 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.476055 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf"] Feb 18 00:51:28 crc kubenswrapper[4858]: W0218 00:51:28.704377 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef8bfa00_4587_4b2d_9fa9_3f58d3b4ed14.slice/crio-0db220df6f2f5a4e6b4e2bc6d73d1a2f3c6b10e65a021d0b5dc9588ef69c1f7b WatchSource:0}: Error finding container 0db220df6f2f5a4e6b4e2bc6d73d1a2f3c6b10e65a021d0b5dc9588ef69c1f7b: Status 404 returned error can't find the container with id 0db220df6f2f5a4e6b4e2bc6d73d1a2f3c6b10e65a021d0b5dc9588ef69c1f7b Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.820865 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.882709 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.894719 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.968917 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" event={"ID":"a78eeeda-46f2-4d10-b160-97d477d1d80e","Type":"ContainerStarted","Data":"73b43f8df5a7fc17fc068dd5b11abd6735d9d19a91023c82d68a64de219a8410"} Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.971594 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" event={"ID":"8cb4efd7-58cc-48fa-8d37-cd5d97add16c","Type":"ContainerStarted","Data":"59aa1aa841d0457f34dc12787f651b275c25f9ca3ae7fea1b92a5ef20c7a4edf"} Feb 18 00:51:28 crc kubenswrapper[4858]: E0218 00:51:28.973175 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" podUID="8cb4efd7-58cc-48fa-8d37-cd5d97add16c" Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.973794 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" event={"ID":"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c","Type":"ContainerStarted","Data":"8b07fbbf9692530a786bc52e0f53c8a240a7e5d248ada4632bf8c0d99a9b7107"} Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.979143 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" event={"ID":"0117af9e-cf65-489b-80f0-8f8c449baf92","Type":"ContainerStarted","Data":"fb2ec460fda30783b3c5461527d5f5d748cb6d059ca919dc7d1597d976bfd2ae"} Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.980550 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qn9qf" event={"ID":"131eb8ce-e6be-487f-b698-370140a1a338","Type":"ContainerStarted","Data":"690cb9fc867e725789a3346717b058092f4c089408a38f56b00d52ce6c7051c5"} Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.981671 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"7eb932c6-138e-44fc-b382-6e702ea9d39b","Type":"ContainerStarted","Data":"ea84f5ff3da828c6b731e2eef036270df2fd08e8caaa8acb6bb64476a7d6107c"} Feb 18 00:51:28 crc kubenswrapper[4858]: I0218 00:51:28.982785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" event={"ID":"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14","Type":"ContainerStarted","Data":"0db220df6f2f5a4e6b4e2bc6d73d1a2f3c6b10e65a021d0b5dc9588ef69c1f7b"} Feb 18 00:51:29 crc kubenswrapper[4858]: I0218 00:51:29.003647 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 18 00:51:29 crc kubenswrapper[4858]: I0218 00:51:29.431724 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc7085a-564b-462e-8853-0c15a3d00f66" path="/var/lib/kubelet/pods/1dc7085a-564b-462e-8853-0c15a3d00f66/volumes" Feb 18 00:51:29 crc kubenswrapper[4858]: I0218 00:51:29.441211 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9befe7db-2687-4b07-ab13-9763231c95c3" path="/var/lib/kubelet/pods/9befe7db-2687-4b07-ab13-9763231c95c3/volumes" Feb 18 00:51:29 crc kubenswrapper[4858]: I0218 00:51:29.993147 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"284a610d-47d0-4f89-925c-c28aabef77e0","Type":"ContainerStarted","Data":"bd5d2d1b8671ac97d9ab889cc996d9ef35c856b27003229bf2ed472a9fc750c5"} Feb 18 00:51:29 crc kubenswrapper[4858]: I0218 00:51:29.994160 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72","Type":"ContainerStarted","Data":"b8f3757f3ea7727394c56f6a55bebc8097829fef44d0fc5a7aaaf71fd38a7bd8"} Feb 18 00:51:29 crc kubenswrapper[4858]: I0218 00:51:29.994852 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"af8bc938-e065-4d61-9abe-62806f59470d","Type":"ContainerStarted","Data":"8f302c559edbc55a4c8ced8bb41b973646bfb36c0c9edcaa545dd37e245a3f3b"} Feb 18 00:51:29 crc kubenswrapper[4858]: I0218 00:51:29.996296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"c716bb3e-01b1-4bc7-a9a2-4604faf684f0","Type":"ContainerStarted","Data":"9aac5a9ff31eeb274ddf19e6ecf4aa5b940f2684385472792503f10b877b0f23"} Feb 18 00:51:30 crc kubenswrapper[4858]: E0218 00:51:29.999894 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" podUID="8cb4efd7-58cc-48fa-8d37-cd5d97add16c" Feb 18 00:51:34 crc kubenswrapper[4858]: I0218 00:51:34.216751 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:34 crc kubenswrapper[4858]: I0218 00:51:34.501652 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:34 crc kubenswrapper[4858]: I0218 00:51:34.554283 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nsq62"] Feb 18 00:51:35 crc kubenswrapper[4858]: I0218 00:51:35.046014 4858 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" podUID="88738498-130e-438a-a822-f9946add222c" containerName="dnsmasq-dns" containerID="cri-o://de58294032ead23981559bf6fc7570e4f2389647725697c8bd5eabbe6325ead1" gracePeriod=10 Feb 18 00:51:36 crc kubenswrapper[4858]: I0218 00:51:36.057107 4858 generic.go:334] "Generic (PLEG): container finished" podID="88738498-130e-438a-a822-f9946add222c" containerID="de58294032ead23981559bf6fc7570e4f2389647725697c8bd5eabbe6325ead1" exitCode=0 Feb 18 00:51:36 crc kubenswrapper[4858]: I0218 00:51:36.057172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" event={"ID":"88738498-130e-438a-a822-f9946add222c","Type":"ContainerDied","Data":"de58294032ead23981559bf6fc7570e4f2389647725697c8bd5eabbe6325ead1"} Feb 18 00:51:40 crc kubenswrapper[4858]: E0218 00:51:40.613234 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 18 00:51:40 crc kubenswrapper[4858]: E0218 00:51:40.614171 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mvd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-distributor-585d9bcbc-6mvr5_openstack(0117af9e-cf65-489b-80f0-8f8c449baf92): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 00:51:40 crc kubenswrapper[4858]: E0218 00:51:40.615404 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" podUID="0117af9e-cf65-489b-80f0-8f8c449baf92" Feb 18 00:51:40 crc kubenswrapper[4858]: E0218 00:51:40.655217 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 18 00:51:40 crc kubenswrapper[4858]: E0218 00:51:40.655397 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-query-frontend,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=query-frontend -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgr6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9_openstack(a78eeeda-46f2-4d10-b160-97d477d1d80e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 00:51:40 crc kubenswrapper[4858]: E0218 00:51:40.656740 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" podUID="a78eeeda-46f2-4d10-b160-97d477d1d80e" Feb 18 00:51:40 crc kubenswrapper[4858]: E0218 00:51:40.672137 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 00:51:40 crc kubenswrapper[4858]: E0218 00:51:40.672280 4858 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pg4f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(acb8b920-9bb7-42b7-8bf7-e8f6b5880654): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:51:40 crc kubenswrapper[4858]: E0218 00:51:40.673486 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="acb8b920-9bb7-42b7-8bf7-e8f6b5880654" Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.751048 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.816052 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-dns-svc\") pod \"88738498-130e-438a-a822-f9946add222c\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.816137 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvqmm\" (UniqueName: \"kubernetes.io/projected/88738498-130e-438a-a822-f9946add222c-kube-api-access-bvqmm\") pod \"88738498-130e-438a-a822-f9946add222c\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.816341 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-config\") pod \"88738498-130e-438a-a822-f9946add222c\" (UID: \"88738498-130e-438a-a822-f9946add222c\") " Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.822867 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88738498-130e-438a-a822-f9946add222c-kube-api-access-bvqmm" (OuterVolumeSpecName: "kube-api-access-bvqmm") pod "88738498-130e-438a-a822-f9946add222c" (UID: "88738498-130e-438a-a822-f9946add222c"). InnerVolumeSpecName "kube-api-access-bvqmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.853062 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-config" (OuterVolumeSpecName: "config") pod "88738498-130e-438a-a822-f9946add222c" (UID: "88738498-130e-438a-a822-f9946add222c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.870687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88738498-130e-438a-a822-f9946add222c" (UID: "88738498-130e-438a-a822-f9946add222c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.918927 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvqmm\" (UniqueName: \"kubernetes.io/projected/88738498-130e-438a-a822-f9946add222c-kube-api-access-bvqmm\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.918974 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:40 crc kubenswrapper[4858]: I0218 00:51:40.918988 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88738498-130e-438a-a822-f9946add222c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:41 crc kubenswrapper[4858]: I0218 00:51:41.102864 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" event={"ID":"88738498-130e-438a-a822-f9946add222c","Type":"ContainerDied","Data":"555f15e387b22570a3f1894e47cf3eab5f091dd1b409de299e5117d98a44975d"} Feb 18 00:51:41 crc kubenswrapper[4858]: I0218 00:51:41.102912 4858 scope.go:117] "RemoveContainer" containerID="de58294032ead23981559bf6fc7570e4f2389647725697c8bd5eabbe6325ead1" Feb 18 00:51:41 crc kubenswrapper[4858]: I0218 00:51:41.103008 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" Feb 18 00:51:41 crc kubenswrapper[4858]: E0218 00:51:41.105300 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" podUID="0117af9e-cf65-489b-80f0-8f8c449baf92" Feb 18 00:51:41 crc kubenswrapper[4858]: E0218 00:51:41.106461 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="acb8b920-9bb7-42b7-8bf7-e8f6b5880654" Feb 18 00:51:41 crc kubenswrapper[4858]: E0218 00:51:41.116909 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" podUID="a78eeeda-46f2-4d10-b160-97d477d1d80e" Feb 18 00:51:41 crc kubenswrapper[4858]: I0218 00:51:41.172734 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nsq62"] Feb 18 00:51:41 crc kubenswrapper[4858]: I0218 00:51:41.178015 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nsq62"] Feb 18 00:51:41 crc kubenswrapper[4858]: I0218 00:51:41.431098 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88738498-130e-438a-a822-f9946add222c" path="/var/lib/kubelet/pods/88738498-130e-438a-a822-f9946add222c/volumes" Feb 18 00:51:41 crc kubenswrapper[4858]: E0218 00:51:41.550172 4858 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Feb 18 00:51:41 crc kubenswrapper[4858]: E0218 00:51:41.550659 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54chddh66dh589h598h646h664h5f6h585h648hbhbh67hbh656hd7h698h9bh5f8hf5h8fh58bh94h6fh5bch5c5h58h68ch666h9fh5bdh54q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsg4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(7eb932c6-138e-44fc-b382-6e702ea9d39b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:51:42 crc kubenswrapper[4858]: E0218 00:51:42.075561 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 18 00:51:42 crc kubenswrapper[4858]: E0218 00:51:42.075753 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65bh55h547h5b5hd6h59fh8h555h5b7hf5h5cchdch696hcbh685h67h56h65fh594h97hch5ffh7dh54h85h5f6h685hbdhb7h67ch57bh56dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47p2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-fvnsh_openstack(19953a4a-b2c2-42f5-a48b-a217cf7b7ab0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:51:42 crc kubenswrapper[4858]: E0218 00:51:42.076977 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-fvnsh" podUID="19953a4a-b2c2-42f5-a48b-a217cf7b7ab0" Feb 18 00:51:42 crc kubenswrapper[4858]: E0218 00:51:42.111339 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-fvnsh" podUID="19953a4a-b2c2-42f5-a48b-a217cf7b7ab0" Feb 18 00:51:42 crc kubenswrapper[4858]: I0218 00:51:42.454560 4858 scope.go:117] "RemoveContainer" containerID="0d10afc14d76c0937c04c7e03706d601e828f76a3dc3f6e2cf423b26916e9e60" Feb 18 00:51:43 crc kubenswrapper[4858]: E0218 00:51:43.202617 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 18 00:51:43 crc kubenswrapper[4858]: E0218 00:51:43.202681 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 18 00:51:43 crc kubenswrapper[4858]: E0218 00:51:43.202847 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jsbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
kube-state-metrics-0_openstack(9788397b-0bb7-43f9-9ac8-69b765750ecb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 00:51:43 crc kubenswrapper[4858]: E0218 00:51:43.204362 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="9788397b-0bb7-43f9-9ac8-69b765750ecb" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.116067 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xxbqx"] Feb 18 00:51:44 crc kubenswrapper[4858]: E0218 00:51:44.116585 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88738498-130e-438a-a822-f9946add222c" containerName="init" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.116613 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="88738498-130e-438a-a822-f9946add222c" containerName="init" Feb 18 00:51:44 crc kubenswrapper[4858]: E0218 00:51:44.116644 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88738498-130e-438a-a822-f9946add222c" containerName="dnsmasq-dns" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.116655 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="88738498-130e-438a-a822-f9946add222c" containerName="dnsmasq-dns" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.116917 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="88738498-130e-438a-a822-f9946add222c" containerName="dnsmasq-dns" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.117870 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.119843 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 00:51:44 crc kubenswrapper[4858]: E0218 00:51:44.131131 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="9788397b-0bb7-43f9-9ac8-69b765750ecb" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.132476 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xxbqx"] Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.183815 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9kzr\" (UniqueName: \"kubernetes.io/projected/b624e2b4-b51c-424d-9e84-adc1286475e7-kube-api-access-h9kzr\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.183920 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b624e2b4-b51c-424d-9e84-adc1286475e7-config\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.184072 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b624e2b4-b51c-424d-9e84-adc1286475e7-combined-ca-bundle\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.184150 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b624e2b4-b51c-424d-9e84-adc1286475e7-ovn-rundir\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.184203 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b624e2b4-b51c-424d-9e84-adc1286475e7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.184477 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b624e2b4-b51c-424d-9e84-adc1286475e7-ovs-rundir\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.216937 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-nsq62" podUID="88738498-130e-438a-a822-f9946add222c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.106:5353: i/o timeout" 
Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.286353 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b624e2b4-b51c-424d-9e84-adc1286475e7-ovs-rundir\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.286405 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9kzr\" (UniqueName: \"kubernetes.io/projected/b624e2b4-b51c-424d-9e84-adc1286475e7-kube-api-access-h9kzr\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.286447 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b624e2b4-b51c-424d-9e84-adc1286475e7-config\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.286516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b624e2b4-b51c-424d-9e84-adc1286475e7-combined-ca-bundle\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.286541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b624e2b4-b51c-424d-9e84-adc1286475e7-ovn-rundir\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.286565 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b624e2b4-b51c-424d-9e84-adc1286475e7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.286746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b624e2b4-b51c-424d-9e84-adc1286475e7-ovs-rundir\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.286944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b624e2b4-b51c-424d-9e84-adc1286475e7-ovn-rundir\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.287287 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b624e2b4-b51c-424d-9e84-adc1286475e7-config\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.287344 4858 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5tqpr"] Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.288624 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.290837 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.292458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b624e2b4-b51c-424d-9e84-adc1286475e7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.303603 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b624e2b4-b51c-424d-9e84-adc1286475e7-combined-ca-bundle\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.306438 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9kzr\" (UniqueName: \"kubernetes.io/projected/b624e2b4-b51c-424d-9e84-adc1286475e7-kube-api-access-h9kzr\") pod \"ovn-controller-metrics-xxbqx\" (UID: \"b624e2b4-b51c-424d-9e84-adc1286475e7\") " pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.363168 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5tqpr"] Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.387808 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/30ad311b-25d3-4c52-850a-c3ef52ef934d-kube-api-access-8c4rc\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.387954 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.388009 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-config\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.388034 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.445525 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xxbqx" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.490156 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.490253 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-config\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.490280 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.490355 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/30ad311b-25d3-4c52-850a-c3ef52ef934d-kube-api-access-8c4rc\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.492099 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.492958 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-config\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.496511 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.499879 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5tqpr"] Feb 18 00:51:44 crc kubenswrapper[4858]: E0218 00:51:44.500631 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-8c4rc], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" podUID="30ad311b-25d3-4c52-850a-c3ef52ef934d" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.514148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/30ad311b-25d3-4c52-850a-c3ef52ef934d-kube-api-access-8c4rc\") pod \"dnsmasq-dns-7fd796d7df-5tqpr\" (UID: 
\"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.520982 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f8p59"] Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.522765 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.531776 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.550782 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f8p59"] Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.591954 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.592048 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.592118 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdnt\" (UniqueName: \"kubernetes.io/projected/60616614-0eb3-4b32-8ccd-1164a699b407-kube-api-access-8xdnt\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.592157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-config\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.592176 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.693920 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xdnt\" (UniqueName: \"kubernetes.io/projected/60616614-0eb3-4b32-8ccd-1164a699b407-kube-api-access-8xdnt\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.693991 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-config\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.694015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.694046 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.694111 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.695035 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.695298 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.695456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.695999 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-config\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.718245 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xdnt\" (UniqueName: \"kubernetes.io/projected/60616614-0eb3-4b32-8ccd-1164a699b407-kube-api-access-8xdnt\") pod \"dnsmasq-dns-86db49b7ff-f8p59\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:44 crc kubenswrapper[4858]: I0218 00:51:44.856689 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.143862 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.154065 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.303556 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-dns-svc\") pod \"30ad311b-25d3-4c52-850a-c3ef52ef934d\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.303928 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-config\") pod \"30ad311b-25d3-4c52-850a-c3ef52ef934d\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.304030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-ovsdbserver-nb\") pod \"30ad311b-25d3-4c52-850a-c3ef52ef934d\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.304033 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30ad311b-25d3-4c52-850a-c3ef52ef934d" (UID: "30ad311b-25d3-4c52-850a-c3ef52ef934d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.304060 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/30ad311b-25d3-4c52-850a-c3ef52ef934d-kube-api-access-8c4rc\") pod \"30ad311b-25d3-4c52-850a-c3ef52ef934d\" (UID: \"30ad311b-25d3-4c52-850a-c3ef52ef934d\") " Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.304364 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-config" (OuterVolumeSpecName: "config") pod "30ad311b-25d3-4c52-850a-c3ef52ef934d" (UID: "30ad311b-25d3-4c52-850a-c3ef52ef934d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.304378 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.304419 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30ad311b-25d3-4c52-850a-c3ef52ef934d" (UID: "30ad311b-25d3-4c52-850a-c3ef52ef934d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.309441 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ad311b-25d3-4c52-850a-c3ef52ef934d-kube-api-access-8c4rc" (OuterVolumeSpecName: "kube-api-access-8c4rc") pod "30ad311b-25d3-4c52-850a-c3ef52ef934d" (UID: "30ad311b-25d3-4c52-850a-c3ef52ef934d"). 
InnerVolumeSpecName "kube-api-access-8c4rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.405934 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.406166 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30ad311b-25d3-4c52-850a-c3ef52ef934d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.406177 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/30ad311b-25d3-4c52-850a-c3ef52ef934d-kube-api-access-8c4rc\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.575656 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f8p59"] Feb 18 00:51:45 crc kubenswrapper[4858]: W0218 00:51:45.615420 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60616614_0eb3_4b32_8ccd_1164a699b407.slice/crio-f68213d2b85a61579893551517e469247fb74b27fb97ebe0955d5d2e79edb282 WatchSource:0}: Error finding container f68213d2b85a61579893551517e469247fb74b27fb97ebe0955d5d2e79edb282: Status 404 returned error can't find the container with id f68213d2b85a61579893551517e469247fb74b27fb97ebe0955d5d2e79edb282 Feb 18 00:51:45 crc kubenswrapper[4858]: I0218 00:51:45.663085 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xxbqx"] Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.151866 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xxbqx" event={"ID":"b624e2b4-b51c-424d-9e84-adc1286475e7","Type":"ContainerStarted","Data":"5eb7ac43ea26c1be80d1898d33d74dacd6816d1e9a9c1576655b9424bda2f32d"} Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.153756 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"31807c8a-5224-4df1-a761-10031d623fa5","Type":"ContainerStarted","Data":"3ef18639b6b305e88e8fa93741c9d9ac5b6cbcba0a8c23333756a20bc64b1426"} Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.153890 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.155263 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"c716bb3e-01b1-4bc7-a9a2-4604faf684f0","Type":"ContainerStarted","Data":"19857e84bdd3a7717f7a35373796c5541aae0d1dc2de54e5dd8884bcd29c6823"} Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.155386 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.157066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerStarted","Data":"dacce4e7cee3cf6c79792f28005fad891b8439866e078de90ac1c26888a44874"} Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.159248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qn9qf" 
event={"ID":"131eb8ce-e6be-487f-b698-370140a1a338","Type":"ContainerStarted","Data":"616d4355d3b5de0599eb49cd2d3c2cace9521968a1b306bce9b2c6f1a8dc1dc4"} Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.160804 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" event={"ID":"60616614-0eb3-4b32-8ccd-1164a699b407","Type":"ContainerStarted","Data":"f68213d2b85a61579893551517e469247fb74b27fb97ebe0955d5d2e79edb282"} Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.160809 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5tqpr" Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.176895 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.253209687000002 podStartE2EDuration="33.176862244s" podCreationTimestamp="2026-02-18 00:51:13 +0000 UTC" firstStartedPulling="2026-02-18 00:51:27.82005896 +0000 UTC m=+1041.125895692" lastFinishedPulling="2026-02-18 00:51:41.743711517 +0000 UTC m=+1055.049548249" observedRunningTime="2026-02-18 00:51:46.175762617 +0000 UTC m=+1059.481599349" watchObservedRunningTime="2026-02-18 00:51:46.176862244 +0000 UTC m=+1059.482698976" Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.207630 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=6.931873373 podStartE2EDuration="20.207610164s" podCreationTimestamp="2026-02-18 00:51:26 +0000 UTC" firstStartedPulling="2026-02-18 00:51:29.372073534 +0000 UTC m=+1042.677910266" lastFinishedPulling="2026-02-18 00:51:42.647810325 +0000 UTC m=+1055.953647057" observedRunningTime="2026-02-18 00:51:46.202388786 +0000 UTC m=+1059.508225518" watchObservedRunningTime="2026-02-18 00:51:46.207610164 +0000 UTC m=+1059.513446886" Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.256151 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5tqpr"] Feb 18 00:51:46 crc kubenswrapper[4858]: I0218 00:51:46.261468 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5tqpr"] Feb 18 00:51:46 crc kubenswrapper[4858]: E0218 00:51:46.783200 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="7eb932c6-138e-44fc-b382-6e702ea9d39b" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.173804 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7eb932c6-138e-44fc-b382-6e702ea9d39b","Type":"ContainerStarted","Data":"6023fb75177fa5f1f2d8258ee2996772acaaf177d8f053cd3e75f0280bef7c9c"} Feb 18 00:51:47 crc kubenswrapper[4858]: E0218 00:51:47.179801 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7eb932c6-138e-44fc-b382-6e702ea9d39b" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.186867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" 
event={"ID":"ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14","Type":"ContainerStarted","Data":"0e11fda9e8fe47714ef67372622d4df41f2dfafe43222d18373e1b8e2d6f8390"} Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.187571 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.193816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"abc34ee9-ce6b-404e-b4d0-bd6211a3bc72","Type":"ContainerStarted","Data":"7c34b00329dcd4309ccabebe76fe60a20b6de2bb01f95d674decd542741d1a9e"} Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.194840 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.196429 4858 generic.go:334] "Generic (PLEG): container finished" podID="60616614-0eb3-4b32-8ccd-1164a699b407" containerID="1585e2231134895bf6d21461e0ce8c8f7218ea273069f62079ef736b0cef8e39" exitCode=0 Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.196488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" event={"ID":"60616614-0eb3-4b32-8ccd-1164a699b407","Type":"ContainerDied","Data":"1585e2231134895bf6d21461e0ce8c8f7218ea273069f62079ef736b0cef8e39"} Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.221871 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"af8bc938-e065-4d61-9abe-62806f59470d","Type":"ContainerStarted","Data":"558e1a739b9a06ed1e7ac4af05543baa75c3a45d18e445c9af43247330ba5677"} Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.223933 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" event={"ID":"8cb4efd7-58cc-48fa-8d37-cd5d97add16c","Type":"ContainerStarted","Data":"74e98a3b88fc285f6733d7238c1343604a903d4c27bd045e2b9311c505e15615"} Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.224600 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.226214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xxbqx" event={"ID":"b624e2b4-b51c-424d-9e84-adc1286475e7","Type":"ContainerStarted","Data":"3eecd7711fddc175be7f82b86c410614174ea732cf930b576079bb69e87b404b"} Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.229896 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" event={"ID":"7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c","Type":"ContainerStarted","Data":"20a91e4adda8581e33ae511895ffac33eaca72aab1dc8f4f1015280cf5289db3"} Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.231570 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.254722 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" podStartSLOduration=7.510339466 podStartE2EDuration="21.254706199s" podCreationTimestamp="2026-02-18 00:51:26 +0000 UTC" firstStartedPulling="2026-02-18 00:51:28.710388502 +0000 UTC m=+1042.016225234" lastFinishedPulling="2026-02-18 00:51:42.454755235 +0000 UTC 
m=+1055.760591967" observedRunningTime="2026-02-18 00:51:47.252126247 +0000 UTC m=+1060.557962979" watchObservedRunningTime="2026-02-18 00:51:47.254706199 +0000 UTC m=+1060.560542921" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.266245 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"69e5abda-5efa-402f-b66c-320cf6ed1d99","Type":"ContainerStarted","Data":"40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84"} Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.270054 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vtwxf" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.286401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a845f908-18e9-47e2-bc4f-01308c8a69b3","Type":"ContainerStarted","Data":"9a9161a7d29c65c61ab452df26272597ae23d3b48ab38e3992ba258689d0bae5"} Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.322830 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=8.059987845 podStartE2EDuration="21.322811991s" podCreationTimestamp="2026-02-18 00:51:26 +0000 UTC" firstStartedPulling="2026-02-18 00:51:29.373514039 +0000 UTC m=+1042.679350771" lastFinishedPulling="2026-02-18 00:51:42.636338185 +0000 UTC m=+1055.942174917" observedRunningTime="2026-02-18 00:51:47.286814483 +0000 UTC m=+1060.592651215" watchObservedRunningTime="2026-02-18 00:51:47.322811991 +0000 UTC m=+1060.628648723" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.414447 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xxbqx" podStartSLOduration=3.414431976 podStartE2EDuration="3.414431976s" podCreationTimestamp="2026-02-18 00:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:47.411767002 +0000 UTC m=+1060.717603734" watchObservedRunningTime="2026-02-18 00:51:47.414431976 +0000 UTC m=+1060.720268708" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.464769 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ad311b-25d3-4c52-850a-c3ef52ef934d" path="/var/lib/kubelet/pods/30ad311b-25d3-4c52-850a-c3ef52ef934d/volumes" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.465332 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.498326 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" podStartSLOduration=-9223372015.356466 podStartE2EDuration="21.498309573s" podCreationTimestamp="2026-02-18 00:51:26 +0000 UTC" firstStartedPulling="2026-02-18 00:51:28.148130494 +0000 UTC m=+1041.453967226" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:47.484671341 +0000 UTC m=+1060.790508063" watchObservedRunningTime="2026-02-18 00:51:47.498309573 +0000 UTC m=+1060.804146305" Feb 18 00:51:47 crc kubenswrapper[4858]: I0218 00:51:47.565223 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-755l8" podStartSLOduration=7.229679428 podStartE2EDuration="21.565207055s" 
podCreationTimestamp="2026-02-18 00:51:26 +0000 UTC" firstStartedPulling="2026-02-18 00:51:28.317560907 +0000 UTC m=+1041.623397639" lastFinishedPulling="2026-02-18 00:51:42.653088524 +0000 UTC m=+1055.958925266" observedRunningTime="2026-02-18 00:51:47.551914571 +0000 UTC m=+1060.857751303" watchObservedRunningTime="2026-02-18 00:51:47.565207055 +0000 UTC m=+1060.871043787" Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.292575 4858 generic.go:334] "Generic (PLEG): container finished" podID="131eb8ce-e6be-487f-b698-370140a1a338" containerID="616d4355d3b5de0599eb49cd2d3c2cace9521968a1b306bce9b2c6f1a8dc1dc4" exitCode=0 Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.292685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qn9qf" event={"ID":"131eb8ce-e6be-487f-b698-370140a1a338","Type":"ContainerDied","Data":"616d4355d3b5de0599eb49cd2d3c2cace9521968a1b306bce9b2c6f1a8dc1dc4"} Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.295021 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a53fffdd-3f92-4632-8391-cc89792884a8","Type":"ContainerStarted","Data":"d2851682fe6c25612d81583590c07588ee0a134c25c499865ea34610e1d5d805"} Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.296518 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"284a610d-47d0-4f89-925c-c28aabef77e0","Type":"ContainerStarted","Data":"89a2c306abc0f09078568365e6642a191734ca18ff0e88e63856d8bbc8433d4e"} Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.296609 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.298220 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" event={"ID":"60616614-0eb3-4b32-8ccd-1164a699b407","Type":"ContainerStarted","Data":"ecc11f06ea381eaf1af315380d1d40785000b6435e111122d89cd0f62add2c0b"} Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.298358 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.300897 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"af8bc938-e065-4d61-9abe-62806f59470d","Type":"ContainerStarted","Data":"625549dac3318d801703a2e7298a9767ce18dd8f1e781b392cdfe287a8dc248f"} Feb 18 00:51:48 crc kubenswrapper[4858]: E0218 00:51:48.308184 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7eb932c6-138e-44fc-b382-6e702ea9d39b" Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.350262 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.584948575 podStartE2EDuration="25.350244768s" podCreationTimestamp="2026-02-18 00:51:23 +0000 UTC" firstStartedPulling="2026-02-18 00:51:29.387070591 +0000 UTC m=+1042.692907323" lastFinishedPulling="2026-02-18 00:51:43.152366794 +0000 UTC m=+1056.458203516" observedRunningTime="2026-02-18 00:51:48.34214846 +0000 UTC m=+1061.647985202" watchObservedRunningTime="2026-02-18 00:51:48.350244768 +0000 UTC m=+1061.656081500" Feb 18 
00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.368703 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" podStartSLOduration=4.368680848 podStartE2EDuration="4.368680848s" podCreationTimestamp="2026-02-18 00:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:48.360723713 +0000 UTC m=+1061.666560455" watchObservedRunningTime="2026-02-18 00:51:48.368680848 +0000 UTC m=+1061.674517610" Feb 18 00:51:48 crc kubenswrapper[4858]: I0218 00:51:48.414128 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=8.644608519 podStartE2EDuration="22.414062674s" podCreationTimestamp="2026-02-18 00:51:26 +0000 UTC" firstStartedPulling="2026-02-18 00:51:29.369516763 +0000 UTC m=+1042.675353495" lastFinishedPulling="2026-02-18 00:51:43.138970918 +0000 UTC m=+1056.444807650" observedRunningTime="2026-02-18 00:51:48.395261446 +0000 UTC m=+1061.701098188" watchObservedRunningTime="2026-02-18 00:51:48.414062674 +0000 UTC m=+1061.719899406" Feb 18 00:51:49 crc kubenswrapper[4858]: I0218 00:51:49.053352 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:49 crc kubenswrapper[4858]: I0218 00:51:49.115217 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:49 crc kubenswrapper[4858]: I0218 00:51:49.317017 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qn9qf" event={"ID":"131eb8ce-e6be-487f-b698-370140a1a338","Type":"ContainerStarted","Data":"3b9ee87bae9e3dc7c3d64a76ffc767b8ba9f7d3a56bf6f92b18c401d6dbd1a13"} Feb 18 00:51:49 crc kubenswrapper[4858]: I0218 00:51:49.317116 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qn9qf" event={"ID":"131eb8ce-e6be-487f-b698-370140a1a338","Type":"ContainerStarted","Data":"cf28ec313227cfe145a3db36a9fb03c6703bdc4018832a32053bed5f3f8d09e7"} Feb 18 00:51:49 crc kubenswrapper[4858]: I0218 00:51:49.317741 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:49 crc kubenswrapper[4858]: I0218 00:51:49.317914 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:51:49 crc kubenswrapper[4858]: I0218 00:51:49.321371 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"22183a64-a68c-47af-8352-b04603981c9d","Type":"ContainerStarted","Data":"e9aa18363f1631ce7e55151bab3fa80b806448015010385db55fb381e8d6b6fd"} Feb 18 00:51:49 crc kubenswrapper[4858]: I0218 00:51:49.322456 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:49 crc kubenswrapper[4858]: I0218 00:51:49.362461 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qn9qf" podStartSLOduration=14.818204495 podStartE2EDuration="29.362435243s" podCreationTimestamp="2026-02-18 00:51:20 +0000 UTC" firstStartedPulling="2026-02-18 00:51:27.910369433 +0000 UTC m=+1041.216206165" lastFinishedPulling="2026-02-18 00:51:42.454600141 +0000 UTC m=+1055.760436913" observedRunningTime="2026-02-18 00:51:49.350883391 +0000 UTC m=+1062.656720163" 
watchObservedRunningTime="2026-02-18 00:51:49.362435243 +0000 UTC m=+1062.668271985" Feb 18 00:51:51 crc kubenswrapper[4858]: I0218 00:51:51.341151 4858 generic.go:334] "Generic (PLEG): container finished" podID="a845f908-18e9-47e2-bc4f-01308c8a69b3" containerID="9a9161a7d29c65c61ab452df26272597ae23d3b48ab38e3992ba258689d0bae5" exitCode=0 Feb 18 00:51:51 crc kubenswrapper[4858]: I0218 00:51:51.341234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a845f908-18e9-47e2-bc4f-01308c8a69b3","Type":"ContainerDied","Data":"9a9161a7d29c65c61ab452df26272597ae23d3b48ab38e3992ba258689d0bae5"} Feb 18 00:51:51 crc kubenswrapper[4858]: I0218 00:51:51.404091 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 00:51:52 crc kubenswrapper[4858]: I0218 00:51:52.348974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a845f908-18e9-47e2-bc4f-01308c8a69b3","Type":"ContainerStarted","Data":"71e3d8de140fb41dd247787edf86a2a3b0d8493bbc67eba3579ec5e69cb98334"} Feb 18 00:51:52 crc kubenswrapper[4858]: I0218 00:51:52.351617 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" event={"ID":"a78eeeda-46f2-4d10-b160-97d477d1d80e","Type":"ContainerStarted","Data":"b269eca586b110315443d0dcd9a9a9c4f1060fd00cedfd0fad6234ce0aecb1fb"} Feb 18 00:51:52 crc kubenswrapper[4858]: I0218 00:51:52.351987 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:51:52 crc kubenswrapper[4858]: I0218 00:51:52.381305 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.661497025 podStartE2EDuration="42.381290484s" podCreationTimestamp="2026-02-18 00:51:10 +0000 UTC" firstStartedPulling="2026-02-18 00:51:26.830876946 +0000 UTC m=+1040.136713678" lastFinishedPulling="2026-02-18 00:51:42.550670365 +0000 UTC m=+1055.856507137" observedRunningTime="2026-02-18 00:51:52.374460917 +0000 UTC m=+1065.680297649" watchObservedRunningTime="2026-02-18 00:51:52.381290484 +0000 UTC m=+1065.687127216" Feb 18 00:51:52 crc kubenswrapper[4858]: I0218 00:51:52.442093 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" podStartSLOduration=-9223372010.412697 podStartE2EDuration="26.442079157s" podCreationTimestamp="2026-02-18 00:51:26 +0000 UTC" firstStartedPulling="2026-02-18 00:51:28.095785267 +0000 UTC m=+1041.401621999" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:52.403476185 +0000 UTC m=+1065.709312917" watchObservedRunningTime="2026-02-18 00:51:52.442079157 +0000 UTC m=+1065.747915889" Feb 18 00:51:53 crc kubenswrapper[4858]: I0218 00:51:53.361022 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"acb8b920-9bb7-42b7-8bf7-e8f6b5880654","Type":"ContainerStarted","Data":"af7b7b5e976eb07577702397a3bc9bcb150d3e6d6a724e484b08adde723e3394"} Feb 18 00:51:53 crc kubenswrapper[4858]: I0218 00:51:53.363060 4858 generic.go:334] "Generic (PLEG): container finished" podID="675206cd-1619-4598-81a9-96b66d09ce88" containerID="dacce4e7cee3cf6c79792f28005fad891b8439866e078de90ac1c26888a44874" exitCode=0 Feb 18 00:51:53 crc kubenswrapper[4858]: I0218 00:51:53.363175 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerDied","Data":"dacce4e7cee3cf6c79792f28005fad891b8439866e078de90ac1c26888a44874"} Feb 18 00:51:53 crc kubenswrapper[4858]: I0218 00:51:53.364574 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" event={"ID":"0117af9e-cf65-489b-80f0-8f8c449baf92","Type":"ContainerStarted","Data":"e06c545f7682d33d7219caa67a6f8e15e62762597bd577d3b95d15242164ab3d"} Feb 18 00:51:53 crc kubenswrapper[4858]: I0218 00:51:53.364836 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:51:53 crc kubenswrapper[4858]: I0218 00:51:53.441290 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" podStartSLOduration=-9223372009.4135 podStartE2EDuration="27.441275724s" podCreationTimestamp="2026-02-18 00:51:26 +0000 UTC" firstStartedPulling="2026-02-18 00:51:28.101410184 +0000 UTC m=+1041.407246906" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:53.438772473 +0000 UTC m=+1066.744609195" watchObservedRunningTime="2026-02-18 00:51:53.441275724 +0000 UTC m=+1066.747112446" Feb 18 00:51:54 crc kubenswrapper[4858]: I0218 00:51:54.024702 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 00:51:54 crc kubenswrapper[4858]: I0218 00:51:54.858402 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:51:54 crc kubenswrapper[4858]: I0218 00:51:54.916670 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7gzrt"] Feb 18 00:51:54 crc kubenswrapper[4858]: I0218 00:51:54.916937 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" podUID="1783fb29-f6d7-47ae-8320-863d18857042" containerName="dnsmasq-dns" containerID="cri-o://132295f522f7ffb813bdb12d797807eece86176caf07a7471c216f5e52436e9c" gracePeriod=10 Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.397090 4858 generic.go:334] "Generic (PLEG): container finished" podID="1783fb29-f6d7-47ae-8320-863d18857042" containerID="132295f522f7ffb813bdb12d797807eece86176caf07a7471c216f5e52436e9c" exitCode=0 Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.397133 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" event={"ID":"1783fb29-f6d7-47ae-8320-863d18857042","Type":"ContainerDied","Data":"132295f522f7ffb813bdb12d797807eece86176caf07a7471c216f5e52436e9c"} Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.399730 4858 generic.go:334] "Generic (PLEG): container finished" podID="22183a64-a68c-47af-8352-b04603981c9d" containerID="e9aa18363f1631ce7e55151bab3fa80b806448015010385db55fb381e8d6b6fd" exitCode=0 Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.399760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"22183a64-a68c-47af-8352-b04603981c9d","Type":"ContainerDied","Data":"e9aa18363f1631ce7e55151bab3fa80b806448015010385db55fb381e8d6b6fd"} Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.501275 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.627029 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7kbz\" (UniqueName: \"kubernetes.io/projected/1783fb29-f6d7-47ae-8320-863d18857042-kube-api-access-q7kbz\") pod \"1783fb29-f6d7-47ae-8320-863d18857042\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.627235 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-config\") pod \"1783fb29-f6d7-47ae-8320-863d18857042\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.627273 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-dns-svc\") pod \"1783fb29-f6d7-47ae-8320-863d18857042\" (UID: \"1783fb29-f6d7-47ae-8320-863d18857042\") " Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.643800 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1783fb29-f6d7-47ae-8320-863d18857042-kube-api-access-q7kbz" (OuterVolumeSpecName: "kube-api-access-q7kbz") pod "1783fb29-f6d7-47ae-8320-863d18857042" (UID: "1783fb29-f6d7-47ae-8320-863d18857042"). InnerVolumeSpecName "kube-api-access-q7kbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.682906 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1783fb29-f6d7-47ae-8320-863d18857042" (UID: "1783fb29-f6d7-47ae-8320-863d18857042"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.690904 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-config" (OuterVolumeSpecName: "config") pod "1783fb29-f6d7-47ae-8320-863d18857042" (UID: "1783fb29-f6d7-47ae-8320-863d18857042"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.729361 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7kbz\" (UniqueName: \"kubernetes.io/projected/1783fb29-f6d7-47ae-8320-863d18857042-kube-api-access-q7kbz\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.729396 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:55 crc kubenswrapper[4858]: I0218 00:51:55.729408 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1783fb29-f6d7-47ae-8320-863d18857042-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.249199 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-nzjqm"] Feb 18 00:51:56 crc kubenswrapper[4858]: E0218 00:51:56.249544 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1783fb29-f6d7-47ae-8320-863d18857042" containerName="init" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.249556 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1783fb29-f6d7-47ae-8320-863d18857042" containerName="init" Feb 18 00:51:56 crc kubenswrapper[4858]: E0218 00:51:56.249578 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1783fb29-f6d7-47ae-8320-863d18857042" containerName="dnsmasq-dns" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.249584 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1783fb29-f6d7-47ae-8320-863d18857042" containerName="dnsmasq-dns" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.249725 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1783fb29-f6d7-47ae-8320-863d18857042" containerName="dnsmasq-dns" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.250578 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.273470 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nzjqm"] Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.341384 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-config\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.341670 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.341707 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-dns-svc\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.341724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.341750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr76d\" (UniqueName: \"kubernetes.io/projected/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-kube-api-access-rr76d\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.407769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" event={"ID":"1783fb29-f6d7-47ae-8320-863d18857042","Type":"ContainerDied","Data":"40bd4c6882e94770f52466231ab98d54d82c283490fb05bea87a5c56be7ba8bd"} Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.407823 4858 scope.go:117] "RemoveContainer" containerID="132295f522f7ffb813bdb12d797807eece86176caf07a7471c216f5e52436e9c" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.407832 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7gzrt" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.434743 4858 scope.go:117] "RemoveContainer" containerID="72c2a20168959b261983fe0a73267472e734e1a5ad374ee38f69849a234d483e" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.444303 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.444354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-dns-svc\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.444374 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.444396 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr76d\" (UniqueName: \"kubernetes.io/projected/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-kube-api-access-rr76d\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.444534 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-config\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.445348 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-config\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.445947 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.446349 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.446526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-dns-svc\") pod 
\"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.454921 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7gzrt"] Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.461299 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7gzrt"] Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.465455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr76d\" (UniqueName: \"kubernetes.io/projected/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-kube-api-access-rr76d\") pod \"dnsmasq-dns-698758b865-nzjqm\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:56 crc kubenswrapper[4858]: I0218 00:51:56.575750 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.047847 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nzjqm"] Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.347525 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.354163 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.357653 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.357680 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.357732 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.358804 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7j5n2" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.392688 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.421070 4858 generic.go:334] "Generic (PLEG): container finished" podID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerID="628528138c29d44263f6ec9ec429257429b2aa999ab82c96f6e33c640535cac8" exitCode=0 Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.422360 4858 generic.go:334] "Generic (PLEG): container finished" podID="acb8b920-9bb7-42b7-8bf7-e8f6b5880654" containerID="af7b7b5e976eb07577702397a3bc9bcb150d3e6d6a724e484b08adde723e3394" exitCode=0 Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.432466 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1783fb29-f6d7-47ae-8320-863d18857042" path="/var/lib/kubelet/pods/1783fb29-f6d7-47ae-8320-863d18857042/volumes" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.437070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nzjqm" event={"ID":"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4","Type":"ContainerDied","Data":"628528138c29d44263f6ec9ec429257429b2aa999ab82c96f6e33c640535cac8"} Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.437125 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-698758b865-nzjqm" event={"ID":"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4","Type":"ContainerStarted","Data":"adf0cc1347d8215a4c1d9c0eaaac93bd06a3e1746a5aa06ffd9a99d144852ad6"} Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.437139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"acb8b920-9bb7-42b7-8bf7-e8f6b5880654","Type":"ContainerDied","Data":"af7b7b5e976eb07577702397a3bc9bcb150d3e6d6a724e484b08adde723e3394"} Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.467201 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlgnj\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-kube-api-access-hlgnj\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.467274 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.467311 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-cache\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.467342 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-lock\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.467392 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11933551-e199-4e19-adbd-641962343c65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11933551-e199-4e19-adbd-641962343c65\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.467412 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.569460 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.569535 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-cache\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 
00:51:57.569598 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-lock\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.569681 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-11933551-e199-4e19-adbd-641962343c65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11933551-e199-4e19-adbd-641962343c65\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.569705 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.569761 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlgnj\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-kube-api-access-hlgnj\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: E0218 00:51:57.570356 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 00:51:57 crc kubenswrapper[4858]: E0218 00:51:57.570380 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 00:51:57 crc kubenswrapper[4858]: E0218 00:51:57.570432 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift podName:d0600ce0-ec0e-48b8-b22e-7f94ffd40c07 nodeName:}" failed. No retries permitted until 2026-02-18 00:51:58.070409723 +0000 UTC m=+1071.376246595 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift") pod "swift-storage-0" (UID: "d0600ce0-ec0e-48b8-b22e-7f94ffd40c07") : configmap "swift-ring-files" not found Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.570591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-cache\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.570681 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-lock\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.576817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.579372 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.579402 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-11933551-e199-4e19-adbd-641962343c65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11933551-e199-4e19-adbd-641962343c65\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e6105f209f9adef61b75228090326dd9441f4b95527736e498420c88025f941/globalmount\"" pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.588112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlgnj\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-kube-api-access-hlgnj\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.621607 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-11933551-e199-4e19-adbd-641962343c65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11933551-e199-4e19-adbd-641962343c65\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.884135 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gc9g7"] Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.886868 4858 util.go:30] "No sandbox for pod can be found. 
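
The MountVolume.SetUp failure above is the kubelet refusing to build the projected "etc-swift" volume for swift-storage-0 because one of its sources, the ConfigMap "swift-ring-files", does not exist yet; the swift-ring-rebalance-gc9g7 job scheduled immediately afterwards is what eventually publishes those ring files. A partial reconstruction of that volume in Go, reduced to the one source the error mentions (the real spec presumably projects more than this):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // Partial reconstruction of the "etc-swift" projected volume on
    // swift-storage-0, limited to the source named in the error above.
    // Optional is left nil (i.e. required), so MountVolume.SetUp keeps
    // failing until the "swift-ring-files" ConfigMap exists.
    func etcSwiftVolume() corev1.Volume {
    	return corev1.Volume{
    		Name: "etc-swift",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{
    					{ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
    						// Optional: nil -> required; a missing ConfigMap is a hard error.
    					}},
    				},
    			},
    		},
    	}
    }

    func main() {
    	v := etcSwiftVolume()
    	fmt.Printf("volume %q requires ConfigMap %q\n",
    		v.Name, v.VolumeSource.Projected.Sources[0].ConfigMap.Name)
    }

Until that ConfigMap appears the pod stays in ContainerCreating and the same operation is retried on the backoff schedule visible further down in the log.
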
Need to start a new one" pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.892146 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.892637 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.895728 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.905269 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gc9g7"] Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.980171 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-swiftconf\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.980273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfng\" (UniqueName: \"kubernetes.io/projected/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-kube-api-access-dmfng\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.980314 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-etc-swift\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.980340 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-ring-data-devices\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.980404 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-scripts\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.980436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-dispersionconf\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:57 crc kubenswrapper[4858]: I0218 00:51:57.980451 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-combined-ca-bundle\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 
00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.082356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-dispersionconf\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.082406 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-combined-ca-bundle\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.082461 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-swiftconf\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.082560 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.082587 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmfng\" (UniqueName: \"kubernetes.io/projected/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-kube-api-access-dmfng\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.082611 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-etc-swift\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.082645 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-ring-data-devices\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.082725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-scripts\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.083896 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-scripts\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: E0218 00:51:58.084439 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" 
not found Feb 18 00:51:58 crc kubenswrapper[4858]: E0218 00:51:58.084463 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 00:51:58 crc kubenswrapper[4858]: E0218 00:51:58.084540 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift podName:d0600ce0-ec0e-48b8-b22e-7f94ffd40c07 nodeName:}" failed. No retries permitted until 2026-02-18 00:51:59.084522226 +0000 UTC m=+1072.390358958 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift") pod "swift-storage-0" (UID: "d0600ce0-ec0e-48b8-b22e-7f94ffd40c07") : configmap "swift-ring-files" not found Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.085202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-etc-swift\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.085301 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-ring-data-devices\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.089196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-swiftconf\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.090070 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-dispersionconf\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.090372 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-combined-ca-bundle\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.103580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmfng\" (UniqueName: \"kubernetes.io/projected/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-kube-api-access-dmfng\") pod \"swift-ring-rebalance-gc9g7\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.212027 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.435187 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nzjqm" event={"ID":"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4","Type":"ContainerStarted","Data":"db35b96b039b721993ad237b685efce5f9ac89543137f076523e4bb6de788a10"} Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.435258 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.442647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"acb8b920-9bb7-42b7-8bf7-e8f6b5880654","Type":"ContainerStarted","Data":"d9fc277887d087c59b35688a60ed0c3da562f7049095ed64d3fda5d26186a7b6"} Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.458336 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-nzjqm" podStartSLOduration=2.458316905 podStartE2EDuration="2.458316905s" podCreationTimestamp="2026-02-18 00:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:58.454598285 +0000 UTC m=+1071.760435017" watchObservedRunningTime="2026-02-18 00:51:58.458316905 +0000 UTC m=+1071.764153647" Feb 18 00:51:58 crc kubenswrapper[4858]: I0218 00:51:58.485400 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371990.369394 podStartE2EDuration="46.485381116s" podCreationTimestamp="2026-02-18 00:51:12 +0000 UTC" firstStartedPulling="2026-02-18 00:51:26.990009389 +0000 UTC m=+1040.295846121" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:58.475653559 +0000 UTC m=+1071.781490301" watchObservedRunningTime="2026-02-18 00:51:58.485381116 +0000 UTC m=+1071.791217848" Feb 18 00:51:59 crc kubenswrapper[4858]: E0218 00:51:59.104589 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 00:51:59 crc kubenswrapper[4858]: E0218 00:51:59.104623 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 00:51:59 crc kubenswrapper[4858]: E0218 00:51:59.104684 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift podName:d0600ce0-ec0e-48b8-b22e-7f94ffd40c07 nodeName:}" failed. No retries permitted until 2026-02-18 00:52:01.104665224 +0000 UTC m=+1074.410501956 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift") pod "swift-storage-0" (UID: "d0600ce0-ec0e-48b8-b22e-7f94ffd40c07") : configmap "swift-ring-files" not found Feb 18 00:51:59 crc kubenswrapper[4858]: I0218 00:51:59.104347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:52:01 crc kubenswrapper[4858]: E0218 00:52:01.013719 4858 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.12:56128->38.102.83.12:33927: write tcp 38.102.83.12:56128->38.102.83.12:33927: write: broken pipe Feb 18 00:52:01 crc kubenswrapper[4858]: I0218 00:52:01.160435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:52:01 crc kubenswrapper[4858]: E0218 00:52:01.160784 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 00:52:01 crc kubenswrapper[4858]: E0218 00:52:01.161092 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 00:52:01 crc kubenswrapper[4858]: E0218 00:52:01.161218 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift podName:d0600ce0-ec0e-48b8-b22e-7f94ffd40c07 nodeName:}" failed. No retries permitted until 2026-02-18 00:52:05.161183978 +0000 UTC m=+1078.467020750 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift") pod "swift-storage-0" (UID: "d0600ce0-ec0e-48b8-b22e-7f94ffd40c07") : configmap "swift-ring-files" not found Feb 18 00:52:01 crc kubenswrapper[4858]: I0218 00:52:01.965946 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 00:52:01 crc kubenswrapper[4858]: I0218 00:52:01.966057 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 00:52:02 crc kubenswrapper[4858]: I0218 00:52:02.094043 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 00:52:02 crc kubenswrapper[4858]: I0218 00:52:02.613023 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 00:52:02 crc kubenswrapper[4858]: I0218 00:52:02.731092 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gc9g7"] Feb 18 00:52:02 crc kubenswrapper[4858]: W0218 00:52:02.732175 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb7b9b3c_2a05_45ae_814b_f7a5058ee1c2.slice/crio-c01ca6a9d60597dd5ffcce117585d2807079e6616bf402b7e216c06b95d04eab WatchSource:0}: Error finding container c01ca6a9d60597dd5ffcce117585d2807079e6616bf402b7e216c06b95d04eab: Status 404 returned error can't find the container with id c01ca6a9d60597dd5ffcce117585d2807079e6616bf402b7e216c06b95d04eab Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.502882 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gc9g7" event={"ID":"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2","Type":"ContainerStarted","Data":"c01ca6a9d60597dd5ffcce117585d2807079e6616bf402b7e216c06b95d04eab"} Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.506528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7eb932c6-138e-44fc-b382-6e702ea9d39b","Type":"ContainerStarted","Data":"30bd01620cfefc58cad065e73d1799a42b019e08fa81be39b742043e4ad1ebb8"} Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.509559 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fvnsh" event={"ID":"19953a4a-b2c2-42f5-a48b-a217cf7b7ab0","Type":"ContainerStarted","Data":"d2b9a6b4ad48a65b3c29e288465dee361bcb36bb2ac62fb03bf9b101321754bc"} Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.510221 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-fvnsh" Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.511960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9788397b-0bb7-43f9-9ac8-69b765750ecb","Type":"ContainerStarted","Data":"6247c2457528a5c07016e1c2d7a5d682e922d871495ccad36689af7e292de274"} Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.512180 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.515472 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerStarted","Data":"75d839d7eadbb020fa34197877d9f2b23d2adc7b88c4576909e777858272040d"} Feb 18 00:52:03 crc 
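
The probe transitions logged above for openstack-galera-0 (startup "unhealthy", then "started", then readiness "ready") are the normal hand-off between a startup probe and a readiness probe: readiness is reported blank while the startup probe is still failing, and only begins gating the ready condition once startup succeeds. An illustrative probe pair in Go; the mysql commands and thresholds are hypothetical, not taken from the actual galera pod spec:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // Hypothetical startup/readiness probe pair matching the sequence above.
    func galeraProbes() (startup, readiness *corev1.Probe) {
    	startup = &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "mysqladmin ping --silent"}},
    		},
    		PeriodSeconds:    3,
    		FailureThreshold: 30, // gives the database ~90s to come up before the pod is restarted
    	}
    	readiness = &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "mysql -e 'SHOW STATUS LIKE \"wsrep_ready\"'"}},
    		},
    		PeriodSeconds: 10,
    	}
    	return startup, readiness
    }

    func main() {
    	s, r := galeraProbes()
    	fmt.Println("startup budget:", s.PeriodSeconds*s.FailureThreshold, "s; readiness period:", r.PeriodSeconds, "s")
    }
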
kubenswrapper[4858]: I0218 00:52:03.535788 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.397651089 podStartE2EDuration="45.5357677s" podCreationTimestamp="2026-02-18 00:51:18 +0000 UTC" firstStartedPulling="2026-02-18 00:51:28.143099682 +0000 UTC m=+1041.448936414" lastFinishedPulling="2026-02-18 00:52:02.281216283 +0000 UTC m=+1075.587053025" observedRunningTime="2026-02-18 00:52:03.527701933 +0000 UTC m=+1076.833538675" watchObservedRunningTime="2026-02-18 00:52:03.5357677 +0000 UTC m=+1076.841604442" Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.553661 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fvnsh" podStartSLOduration=13.519287094 podStartE2EDuration="43.553638436s" podCreationTimestamp="2026-02-18 00:51:20 +0000 UTC" firstStartedPulling="2026-02-18 00:51:27.80122139 +0000 UTC m=+1041.107058122" lastFinishedPulling="2026-02-18 00:51:57.835572732 +0000 UTC m=+1071.141409464" observedRunningTime="2026-02-18 00:52:03.540712991 +0000 UTC m=+1076.846549743" watchObservedRunningTime="2026-02-18 00:52:03.553638436 +0000 UTC m=+1076.859475168" Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.562955 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.049616878 podStartE2EDuration="48.562933923s" podCreationTimestamp="2026-02-18 00:51:15 +0000 UTC" firstStartedPulling="2026-02-18 00:51:27.765996071 +0000 UTC m=+1041.071832803" lastFinishedPulling="2026-02-18 00:52:02.279313106 +0000 UTC m=+1075.585149848" observedRunningTime="2026-02-18 00:52:03.556856025 +0000 UTC m=+1076.862692757" watchObservedRunningTime="2026-02-18 00:52:03.562933923 +0000 UTC m=+1076.868770655" Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.873307 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 00:52:03 crc kubenswrapper[4858]: I0218 00:52:03.873697 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.257340 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-98e5-account-create-update-hhlbm"] Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.258822 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.262467 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.273363 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-98e5-account-create-update-hhlbm"] Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.322481 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7pckt"] Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.324360 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7pckt" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.343626 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7pckt"] Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.387928 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4682a2f1-a646-4f7e-9b03-578bbe315f48-operator-scripts\") pod \"glance-98e5-account-create-update-hhlbm\" (UID: \"4682a2f1-a646-4f7e-9b03-578bbe315f48\") " pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.388270 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmbmh\" (UniqueName: \"kubernetes.io/projected/4682a2f1-a646-4f7e-9b03-578bbe315f48-kube-api-access-dmbmh\") pod \"glance-98e5-account-create-update-hhlbm\" (UID: \"4682a2f1-a646-4f7e-9b03-578bbe315f48\") " pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.408154 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.489860 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2bhj\" (UniqueName: \"kubernetes.io/projected/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-kube-api-access-x2bhj\") pod \"glance-db-create-7pckt\" (UID: \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\") " pod="openstack/glance-db-create-7pckt" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.491824 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-operator-scripts\") pod \"glance-db-create-7pckt\" (UID: \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\") " pod="openstack/glance-db-create-7pckt" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.491955 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4682a2f1-a646-4f7e-9b03-578bbe315f48-operator-scripts\") pod \"glance-98e5-account-create-update-hhlbm\" (UID: \"4682a2f1-a646-4f7e-9b03-578bbe315f48\") " pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.492066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmbmh\" (UniqueName: \"kubernetes.io/projected/4682a2f1-a646-4f7e-9b03-578bbe315f48-kube-api-access-dmbmh\") pod \"glance-98e5-account-create-update-hhlbm\" (UID: \"4682a2f1-a646-4f7e-9b03-578bbe315f48\") " pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.494769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4682a2f1-a646-4f7e-9b03-578bbe315f48-operator-scripts\") pod \"glance-98e5-account-create-update-hhlbm\" (UID: \"4682a2f1-a646-4f7e-9b03-578bbe315f48\") " pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.508385 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmbmh\" (UniqueName: 
\"kubernetes.io/projected/4682a2f1-a646-4f7e-9b03-578bbe315f48-kube-api-access-dmbmh\") pod \"glance-98e5-account-create-update-hhlbm\" (UID: \"4682a2f1-a646-4f7e-9b03-578bbe315f48\") " pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.593822 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-operator-scripts\") pod \"glance-db-create-7pckt\" (UID: \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\") " pod="openstack/glance-db-create-7pckt" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.594149 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2bhj\" (UniqueName: \"kubernetes.io/projected/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-kube-api-access-x2bhj\") pod \"glance-db-create-7pckt\" (UID: \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\") " pod="openstack/glance-db-create-7pckt" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.595808 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-operator-scripts\") pod \"glance-db-create-7pckt\" (UID: \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\") " pod="openstack/glance-db-create-7pckt" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.601461 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.613676 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2bhj\" (UniqueName: \"kubernetes.io/projected/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-kube-api-access-x2bhj\") pod \"glance-db-create-7pckt\" (UID: \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\") " pod="openstack/glance-db-create-7pckt" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.669524 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.835424 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7pckt" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.979107 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5l2bs"] Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.980516 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:04 crc kubenswrapper[4858]: I0218 00:52:04.988420 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5l2bs"] Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.100901 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2c0e-account-create-update-4kjnq"] Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.102641 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.109726 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eba52de-f7c0-4843-941d-20a57d0e012b-operator-scripts\") pod \"keystone-db-create-5l2bs\" (UID: \"8eba52de-f7c0-4843-941d-20a57d0e012b\") " pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.110237 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vj8q\" (UniqueName: \"kubernetes.io/projected/8eba52de-f7c0-4843-941d-20a57d0e012b-kube-api-access-4vj8q\") pod \"keystone-db-create-5l2bs\" (UID: \"8eba52de-f7c0-4843-941d-20a57d0e012b\") " pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.118682 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.122202 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2c0e-account-create-update-4kjnq"] Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.139083 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-98e5-account-create-update-hhlbm"] Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.211845 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7czr\" (UniqueName: \"kubernetes.io/projected/2bdc4a39-ee6d-47eb-bb82-665a206a9690-kube-api-access-p7czr\") pod \"keystone-2c0e-account-create-update-4kjnq\" (UID: \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\") " pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.211900 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vj8q\" (UniqueName: \"kubernetes.io/projected/8eba52de-f7c0-4843-941d-20a57d0e012b-kube-api-access-4vj8q\") pod \"keystone-db-create-5l2bs\" (UID: \"8eba52de-f7c0-4843-941d-20a57d0e012b\") " pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.211930 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bdc4a39-ee6d-47eb-bb82-665a206a9690-operator-scripts\") pod \"keystone-2c0e-account-create-update-4kjnq\" (UID: \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\") " pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.212062 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.212099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eba52de-f7c0-4843-941d-20a57d0e012b-operator-scripts\") pod \"keystone-db-create-5l2bs\" (UID: \"8eba52de-f7c0-4843-941d-20a57d0e012b\") " pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:05 crc kubenswrapper[4858]: E0218 00:52:05.212246 4858 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 00:52:05 crc kubenswrapper[4858]: E0218 00:52:05.212280 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 00:52:05 crc kubenswrapper[4858]: E0218 00:52:05.212352 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift podName:d0600ce0-ec0e-48b8-b22e-7f94ffd40c07 nodeName:}" failed. No retries permitted until 2026-02-18 00:52:13.212334024 +0000 UTC m=+1086.518170756 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift") pod "swift-storage-0" (UID: "d0600ce0-ec0e-48b8-b22e-7f94ffd40c07") : configmap "swift-ring-files" not found Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.214256 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eba52de-f7c0-4843-941d-20a57d0e012b-operator-scripts\") pod \"keystone-db-create-5l2bs\" (UID: \"8eba52de-f7c0-4843-941d-20a57d0e012b\") " pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.261509 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7pckt"] Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.266368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vj8q\" (UniqueName: \"kubernetes.io/projected/8eba52de-f7c0-4843-941d-20a57d0e012b-kube-api-access-4vj8q\") pod \"keystone-db-create-5l2bs\" (UID: \"8eba52de-f7c0-4843-941d-20a57d0e012b\") " pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.276144 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7tfnm"] Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.277353 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.282877 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7tfnm"] Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.303335 4858 util.go:30] "No sandbox for pod can be found. 
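
The durationBeforeRetry values for the failing "etc-swift" mount (500ms, 1s, 2s, 4s, and now 8s) show the exponential backoff the kubelet applies to a repeatedly failing volume operation. A minimal sketch of that doubling schedule, not the actual nestedpendingoperations implementation (which also caps the delay after roughly two minutes):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const (
    		initialDelay = 500 * time.Millisecond
    		maxDelay     = 2 * time.Minute // the kubelet also caps the delay; the exact ceiling is approximate here
    	)
    	delay := initialDelay
    	for attempt := 1; attempt <= 5; attempt++ {
    		fmt.Printf("attempt %d failed, retry allowed after %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    	// Prints 500ms, 1s, 2s, 4s, 8s — the same progression as the log above.
    }
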
Need to start a new one" pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.309709 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.309737 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.313341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bdc4a39-ee6d-47eb-bb82-665a206a9690-operator-scripts\") pod \"keystone-2c0e-account-create-update-4kjnq\" (UID: \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\") " pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.313504 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7czr\" (UniqueName: \"kubernetes.io/projected/2bdc4a39-ee6d-47eb-bb82-665a206a9690-kube-api-access-p7czr\") pod \"keystone-2c0e-account-create-update-4kjnq\" (UID: \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\") " pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.314321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bdc4a39-ee6d-47eb-bb82-665a206a9690-operator-scripts\") pod \"keystone-2c0e-account-create-update-4kjnq\" (UID: \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\") " pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.331167 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7czr\" (UniqueName: \"kubernetes.io/projected/2bdc4a39-ee6d-47eb-bb82-665a206a9690-kube-api-access-p7czr\") pod \"keystone-2c0e-account-create-update-4kjnq\" (UID: \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\") " pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.343274 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.388065 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3a23-account-create-update-tl98n"] Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.389180 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.390682 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.402693 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a23-account-create-update-tl98n"] Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.414653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2qmp\" (UniqueName: \"kubernetes.io/projected/823f6441-ed95-4f51-82c1-b8063d153460-kube-api-access-t2qmp\") pod \"placement-db-create-7tfnm\" (UID: \"823f6441-ed95-4f51-82c1-b8063d153460\") " pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.414685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/823f6441-ed95-4f51-82c1-b8063d153460-operator-scripts\") pod \"placement-db-create-7tfnm\" (UID: \"823f6441-ed95-4f51-82c1-b8063d153460\") " pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.436048 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.516349 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2qmp\" (UniqueName: \"kubernetes.io/projected/823f6441-ed95-4f51-82c1-b8063d153460-kube-api-access-t2qmp\") pod \"placement-db-create-7tfnm\" (UID: \"823f6441-ed95-4f51-82c1-b8063d153460\") " pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.516614 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/823f6441-ed95-4f51-82c1-b8063d153460-operator-scripts\") pod \"placement-db-create-7tfnm\" (UID: \"823f6441-ed95-4f51-82c1-b8063d153460\") " pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.516698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrh7\" (UniqueName: \"kubernetes.io/projected/d4c577ca-985a-4041-a06e-f987c0cd3608-kube-api-access-prrh7\") pod \"placement-3a23-account-create-update-tl98n\" (UID: \"d4c577ca-985a-4041-a06e-f987c0cd3608\") " pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.516740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4c577ca-985a-4041-a06e-f987c0cd3608-operator-scripts\") pod \"placement-3a23-account-create-update-tl98n\" (UID: \"d4c577ca-985a-4041-a06e-f987c0cd3608\") " pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.517361 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/823f6441-ed95-4f51-82c1-b8063d153460-operator-scripts\") pod \"placement-db-create-7tfnm\" (UID: \"823f6441-ed95-4f51-82c1-b8063d153460\") " pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 
00:52:05.532166 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2qmp\" (UniqueName: \"kubernetes.io/projected/823f6441-ed95-4f51-82c1-b8063d153460-kube-api-access-t2qmp\") pod \"placement-db-create-7tfnm\" (UID: \"823f6441-ed95-4f51-82c1-b8063d153460\") " pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.535529 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-98e5-account-create-update-hhlbm" event={"ID":"4682a2f1-a646-4f7e-9b03-578bbe315f48","Type":"ContainerStarted","Data":"e86a63869b839ad5d7fa57af58598efa01900b36c230e17ada4cc2ecc7e99f9b"} Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.540255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"22183a64-a68c-47af-8352-b04603981c9d","Type":"ContainerStarted","Data":"cdbefee0a96018b91ebb6f9c58a410ed636d627c4b0d591c55a0980027a4b536"} Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.619039 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prrh7\" (UniqueName: \"kubernetes.io/projected/d4c577ca-985a-4041-a06e-f987c0cd3608-kube-api-access-prrh7\") pod \"placement-3a23-account-create-update-tl98n\" (UID: \"d4c577ca-985a-4041-a06e-f987c0cd3608\") " pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.619127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4c577ca-985a-4041-a06e-f987c0cd3608-operator-scripts\") pod \"placement-3a23-account-create-update-tl98n\" (UID: \"d4c577ca-985a-4041-a06e-f987c0cd3608\") " pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.619800 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4c577ca-985a-4041-a06e-f987c0cd3608-operator-scripts\") pod \"placement-3a23-account-create-update-tl98n\" (UID: \"d4c577ca-985a-4041-a06e-f987c0cd3608\") " pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.634581 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prrh7\" (UniqueName: \"kubernetes.io/projected/d4c577ca-985a-4041-a06e-f987c0cd3608-kube-api-access-prrh7\") pod \"placement-3a23-account-create-update-tl98n\" (UID: \"d4c577ca-985a-4041-a06e-f987c0cd3608\") " pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.762908 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:05 crc kubenswrapper[4858]: I0218 00:52:05.771631 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:06 crc kubenswrapper[4858]: W0218 00:52:06.262803 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf78e85b8_d7a3_4b15_991b_6104ba1ffe95.slice/crio-3bc105847d410e72bea06bb83b64a3d493cf092c72514d82d37f4a44efcfc7c4 WatchSource:0}: Error finding container 3bc105847d410e72bea06bb83b64a3d493cf092c72514d82d37f4a44efcfc7c4: Status 404 returned error can't find the container with id 3bc105847d410e72bea06bb83b64a3d493cf092c72514d82d37f4a44efcfc7c4 Feb 18 00:52:06 crc kubenswrapper[4858]: I0218 00:52:06.551244 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerStarted","Data":"94ef0f112afbbed93175a754c3c0323cc4aaef125fe8d7efcfedd05c0e736362"} Feb 18 00:52:06 crc kubenswrapper[4858]: I0218 00:52:06.552812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7pckt" event={"ID":"f78e85b8-d7a3-4b15-991b-6104ba1ffe95","Type":"ContainerStarted","Data":"3bc105847d410e72bea06bb83b64a3d493cf092c72514d82d37f4a44efcfc7c4"} Feb 18 00:52:06 crc kubenswrapper[4858]: I0218 00:52:06.577678 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:52:06 crc kubenswrapper[4858]: I0218 00:52:06.681245 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f8p59"] Feb 18 00:52:06 crc kubenswrapper[4858]: I0218 00:52:06.681457 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" podUID="60616614-0eb3-4b32-8ccd-1164a699b407" containerName="dnsmasq-dns" containerID="cri-o://ecc11f06ea381eaf1af315380d1d40785000b6435e111122d89cd0f62add2c0b" gracePeriod=10 Feb 18 00:52:06 crc kubenswrapper[4858]: I0218 00:52:06.971086 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-v9f9c" Feb 18 00:52:07 crc kubenswrapper[4858]: I0218 00:52:07.061926 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9" Feb 18 00:52:07 crc kubenswrapper[4858]: I0218 00:52:07.564165 4858 generic.go:334] "Generic (PLEG): container finished" podID="4682a2f1-a646-4f7e-9b03-578bbe315f48" containerID="2d3fe8f13155b55798497843ec969545b53598a126ec18c8c80342abd5186a0e" exitCode=0 Feb 18 00:52:07 crc kubenswrapper[4858]: I0218 00:52:07.564478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-98e5-account-create-update-hhlbm" event={"ID":"4682a2f1-a646-4f7e-9b03-578bbe315f48","Type":"ContainerDied","Data":"2d3fe8f13155b55798497843ec969545b53598a126ec18c8c80342abd5186a0e"} Feb 18 00:52:07 crc kubenswrapper[4858]: I0218 00:52:07.576889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"22183a64-a68c-47af-8352-b04603981c9d","Type":"ContainerStarted","Data":"3608ad072adaee5850c6acfa4165cd3b61bcd17a11cb78c0d9470e4f88784e30"} Feb 18 00:52:07 crc kubenswrapper[4858]: I0218 00:52:07.577014 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 18 00:52:07 crc kubenswrapper[4858]: I0218 00:52:07.597037 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="60616614-0eb3-4b32-8ccd-1164a699b407" containerID="ecc11f06ea381eaf1af315380d1d40785000b6435e111122d89cd0f62add2c0b" exitCode=0 Feb 18 00:52:07 crc kubenswrapper[4858]: I0218 00:52:07.597080 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" event={"ID":"60616614-0eb3-4b32-8ccd-1164a699b407","Type":"ContainerDied","Data":"ecc11f06ea381eaf1af315380d1d40785000b6435e111122d89cd0f62add2c0b"} Feb 18 00:52:07 crc kubenswrapper[4858]: I0218 00:52:07.624478 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=15.154672176 podStartE2EDuration="51.624463203s" podCreationTimestamp="2026-02-18 00:51:16 +0000 UTC" firstStartedPulling="2026-02-18 00:51:27.779161583 +0000 UTC m=+1041.084998315" lastFinishedPulling="2026-02-18 00:52:04.24895261 +0000 UTC m=+1077.554789342" observedRunningTime="2026-02-18 00:52:07.618125438 +0000 UTC m=+1080.923962170" watchObservedRunningTime="2026-02-18 00:52:07.624463203 +0000 UTC m=+1080.930299935" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.169354 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.262778 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c716bb3e-01b1-4bc7-a9a2-4604faf684f0" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.282415 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-sb\") pod \"60616614-0eb3-4b32-8ccd-1164a699b407\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.282536 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-dns-svc\") pod \"60616614-0eb3-4b32-8ccd-1164a699b407\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.282560 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-config\") pod \"60616614-0eb3-4b32-8ccd-1164a699b407\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.282660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-nb\") pod \"60616614-0eb3-4b32-8ccd-1164a699b407\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.282755 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xdnt\" (UniqueName: \"kubernetes.io/projected/60616614-0eb3-4b32-8ccd-1164a699b407-kube-api-access-8xdnt\") pod \"60616614-0eb3-4b32-8ccd-1164a699b407\" (UID: \"60616614-0eb3-4b32-8ccd-1164a699b407\") " Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.288078 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/60616614-0eb3-4b32-8ccd-1164a699b407-kube-api-access-8xdnt" (OuterVolumeSpecName: "kube-api-access-8xdnt") pod "60616614-0eb3-4b32-8ccd-1164a699b407" (UID: "60616614-0eb3-4b32-8ccd-1164a699b407"). InnerVolumeSpecName "kube-api-access-8xdnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.323248 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "60616614-0eb3-4b32-8ccd-1164a699b407" (UID: "60616614-0eb3-4b32-8ccd-1164a699b407"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.325906 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "60616614-0eb3-4b32-8ccd-1164a699b407" (UID: "60616614-0eb3-4b32-8ccd-1164a699b407"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.326407 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-config" (OuterVolumeSpecName: "config") pod "60616614-0eb3-4b32-8ccd-1164a699b407" (UID: "60616614-0eb3-4b32-8ccd-1164a699b407"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.334512 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "60616614-0eb3-4b32-8ccd-1164a699b407" (UID: "60616614-0eb3-4b32-8ccd-1164a699b407"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.346119 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.379153 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.392355 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.392389 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.392402 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.392417 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xdnt\" (UniqueName: \"kubernetes.io/projected/60616614-0eb3-4b32-8ccd-1164a699b407-kube-api-access-8xdnt\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.392427 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60616614-0eb3-4b32-8ccd-1164a699b407-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.453630 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5l2bs"] Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.460652 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3a23-account-create-update-tl98n"] Feb 18 00:52:08 crc kubenswrapper[4858]: W0218 00:52:08.461654 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4c577ca_985a_4041_a06e_f987c0cd3608.slice/crio-bbbae3e0555614556ce2f5586bedb4e44817b733fdc6ec18a123a75060ba9646 WatchSource:0}: Error finding container bbbae3e0555614556ce2f5586bedb4e44817b733fdc6ec18a123a75060ba9646: Status 404 returned error can't find the container with id bbbae3e0555614556ce2f5586bedb4e44817b733fdc6ec18a123a75060ba9646 Feb 18 00:52:08 crc kubenswrapper[4858]: W0218 00:52:08.464672 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bdc4a39_ee6d_47eb_bb82_665a206a9690.slice/crio-a2c7d76261072e233df6b005b6ebdfa9e27a5d6030e2f1803aafc952627db30b WatchSource:0}: Error finding container a2c7d76261072e233df6b005b6ebdfa9e27a5d6030e2f1803aafc952627db30b: Status 404 returned error can't find the container with id a2c7d76261072e233df6b005b6ebdfa9e27a5d6030e2f1803aafc952627db30b Feb 18 00:52:08 crc kubenswrapper[4858]: W0218 00:52:08.469607 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eba52de_f7c0_4843_941d_20a57d0e012b.slice/crio-50837ab7e084f30fe7199f4c877becfd814c22e5b1a42ed6d6d9637317e822df WatchSource:0}: Error finding container 
50837ab7e084f30fe7199f4c877becfd814c22e5b1a42ed6d6d9637317e822df: Status 404 returned error can't find the container with id 50837ab7e084f30fe7199f4c877becfd814c22e5b1a42ed6d6d9637317e822df Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.471075 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2c0e-account-create-update-4kjnq"] Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.471336 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.472210 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.638023 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" event={"ID":"60616614-0eb3-4b32-8ccd-1164a699b407","Type":"ContainerDied","Data":"f68213d2b85a61579893551517e469247fb74b27fb97ebe0955d5d2e79edb282"} Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.638257 4858 scope.go:117] "RemoveContainer" containerID="ecc11f06ea381eaf1af315380d1d40785000b6435e111122d89cd0f62add2c0b" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.638061 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-f8p59" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.680299 4858 generic.go:334] "Generic (PLEG): container finished" podID="f78e85b8-d7a3-4b15-991b-6104ba1ffe95" containerID="cf56eb1ce15076ec70ab293ad790c0773f7d8ed199c775ea3e24ceb0351914c3" exitCode=0 Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.680356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7pckt" event={"ID":"f78e85b8-d7a3-4b15-991b-6104ba1ffe95","Type":"ContainerDied","Data":"cf56eb1ce15076ec70ab293ad790c0773f7d8ed199c775ea3e24ceb0351914c3"} Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.696063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a23-account-create-update-tl98n" event={"ID":"d4c577ca-985a-4041-a06e-f987c0cd3608","Type":"ContainerStarted","Data":"bbbae3e0555614556ce2f5586bedb4e44817b733fdc6ec18a123a75060ba9646"} Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.701112 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5l2bs" event={"ID":"8eba52de-f7c0-4843-941d-20a57d0e012b","Type":"ContainerStarted","Data":"50837ab7e084f30fe7199f4c877becfd814c22e5b1a42ed6d6d9637317e822df"} Feb 18 00:52:08 crc kubenswrapper[4858]: W0218 00:52:08.707853 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod823f6441_ed95_4f51_82c1_b8063d153460.slice/crio-8766c5707dfa2795cd1a8714bf9499de8e00c8e2d085bc18132bcfa65245dba1 WatchSource:0}: Error finding container 8766c5707dfa2795cd1a8714bf9499de8e00c8e2d085bc18132bcfa65245dba1: Status 404 returned error can't find the container with id 8766c5707dfa2795cd1a8714bf9499de8e00c8e2d085bc18132bcfa65245dba1 Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.707861 4858 scope.go:117] "RemoveContainer" containerID="1585e2231134895bf6d21461e0ce8c8f7218ea273069f62079ef736b0cef8e39" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.707897 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7tfnm"] Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.709697 4858 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-ring-rebalance-gc9g7" event={"ID":"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2","Type":"ContainerStarted","Data":"0694bdf88dc812ac72063928621d9d239c2b065aa50791fee2c74216d89b38f0"} Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.714760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2c0e-account-create-update-4kjnq" event={"ID":"2bdc4a39-ee6d-47eb-bb82-665a206a9690","Type":"ContainerStarted","Data":"a2c7d76261072e233df6b005b6ebdfa9e27a5d6030e2f1803aafc952627db30b"} Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.719873 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f8p59"] Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.725271 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.737418 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f8p59"] Feb 18 00:52:08 crc kubenswrapper[4858]: I0218 00:52:08.758133 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gc9g7" podStartSLOduration=6.516668953 podStartE2EDuration="11.75811244s" podCreationTimestamp="2026-02-18 00:51:57 +0000 UTC" firstStartedPulling="2026-02-18 00:52:02.735944686 +0000 UTC m=+1076.041781418" lastFinishedPulling="2026-02-18 00:52:07.977388173 +0000 UTC m=+1081.283224905" observedRunningTime="2026-02-18 00:52:08.737093587 +0000 UTC m=+1082.042930319" watchObservedRunningTime="2026-02-18 00:52:08.75811244 +0000 UTC m=+1082.063949172" Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.363375 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.419008 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4682a2f1-a646-4f7e-9b03-578bbe315f48-operator-scripts\") pod \"4682a2f1-a646-4f7e-9b03-578bbe315f48\" (UID: \"4682a2f1-a646-4f7e-9b03-578bbe315f48\") " Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.419074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmbmh\" (UniqueName: \"kubernetes.io/projected/4682a2f1-a646-4f7e-9b03-578bbe315f48-kube-api-access-dmbmh\") pod \"4682a2f1-a646-4f7e-9b03-578bbe315f48\" (UID: \"4682a2f1-a646-4f7e-9b03-578bbe315f48\") " Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.423165 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4682a2f1-a646-4f7e-9b03-578bbe315f48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4682a2f1-a646-4f7e-9b03-578bbe315f48" (UID: "4682a2f1-a646-4f7e-9b03-578bbe315f48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.426619 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4682a2f1-a646-4f7e-9b03-578bbe315f48-kube-api-access-dmbmh" (OuterVolumeSpecName: "kube-api-access-dmbmh") pod "4682a2f1-a646-4f7e-9b03-578bbe315f48" (UID: "4682a2f1-a646-4f7e-9b03-578bbe315f48"). InnerVolumeSpecName "kube-api-access-dmbmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.428550 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60616614-0eb3-4b32-8ccd-1164a699b407" path="/var/lib/kubelet/pods/60616614-0eb3-4b32-8ccd-1164a699b407/volumes" Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.521004 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4682a2f1-a646-4f7e-9b03-578bbe315f48-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.521042 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmbmh\" (UniqueName: \"kubernetes.io/projected/4682a2f1-a646-4f7e-9b03-578bbe315f48-kube-api-access-dmbmh\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.725848 4858 generic.go:334] "Generic (PLEG): container finished" podID="d4c577ca-985a-4041-a06e-f987c0cd3608" containerID="41652346b6608509348e5a146d3c7364e62b3ccdacc268a0b7914778d0e36bd5" exitCode=0 Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.726185 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a23-account-create-update-tl98n" event={"ID":"d4c577ca-985a-4041-a06e-f987c0cd3608","Type":"ContainerDied","Data":"41652346b6608509348e5a146d3c7364e62b3ccdacc268a0b7914778d0e36bd5"} Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.727613 4858 generic.go:334] "Generic (PLEG): container finished" podID="823f6441-ed95-4f51-82c1-b8063d153460" containerID="2b69a4de4e096880237a7f1a3d7385679f75c319ef580676f605d0998a665853" exitCode=0 Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.727667 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7tfnm" event={"ID":"823f6441-ed95-4f51-82c1-b8063d153460","Type":"ContainerDied","Data":"2b69a4de4e096880237a7f1a3d7385679f75c319ef580676f605d0998a665853"} Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.727691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7tfnm" event={"ID":"823f6441-ed95-4f51-82c1-b8063d153460","Type":"ContainerStarted","Data":"8766c5707dfa2795cd1a8714bf9499de8e00c8e2d085bc18132bcfa65245dba1"} Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.730972 4858 generic.go:334] "Generic (PLEG): container finished" podID="8eba52de-f7c0-4843-941d-20a57d0e012b" containerID="0443d1ea6e1d1456c9f5549b6bef28924e20efcd54eaf8fe392b0298c8eea250" exitCode=0 Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.731082 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5l2bs" event={"ID":"8eba52de-f7c0-4843-941d-20a57d0e012b","Type":"ContainerDied","Data":"0443d1ea6e1d1456c9f5549b6bef28924e20efcd54eaf8fe392b0298c8eea250"} Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.732972 4858 generic.go:334] "Generic (PLEG): container finished" podID="2bdc4a39-ee6d-47eb-bb82-665a206a9690" containerID="9e9eea4a7486b91528e077dcbfef2001f0332c3caa34c86366263d261db70bc0" exitCode=0 Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.733062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2c0e-account-create-update-4kjnq" event={"ID":"2bdc4a39-ee6d-47eb-bb82-665a206a9690","Type":"ContainerDied","Data":"9e9eea4a7486b91528e077dcbfef2001f0332c3caa34c86366263d261db70bc0"} Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.735016 4858 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-98e5-account-create-update-hhlbm" Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.735029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-98e5-account-create-update-hhlbm" event={"ID":"4682a2f1-a646-4f7e-9b03-578bbe315f48","Type":"ContainerDied","Data":"e86a63869b839ad5d7fa57af58598efa01900b36c230e17ada4cc2ecc7e99f9b"} Feb 18 00:52:09 crc kubenswrapper[4858]: I0218 00:52:09.735054 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e86a63869b839ad5d7fa57af58598efa01900b36c230e17ada4cc2ecc7e99f9b" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.370762 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.632549 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6cwjs"] Feb 18 00:52:10 crc kubenswrapper[4858]: E0218 00:52:10.632938 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60616614-0eb3-4b32-8ccd-1164a699b407" containerName="dnsmasq-dns" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.632952 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="60616614-0eb3-4b32-8ccd-1164a699b407" containerName="dnsmasq-dns" Feb 18 00:52:10 crc kubenswrapper[4858]: E0218 00:52:10.632963 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682a2f1-a646-4f7e-9b03-578bbe315f48" containerName="mariadb-account-create-update" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.632969 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4682a2f1-a646-4f7e-9b03-578bbe315f48" containerName="mariadb-account-create-update" Feb 18 00:52:10 crc kubenswrapper[4858]: E0218 00:52:10.632988 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60616614-0eb3-4b32-8ccd-1164a699b407" containerName="init" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.632994 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="60616614-0eb3-4b32-8ccd-1164a699b407" containerName="init" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.633164 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="60616614-0eb3-4b32-8ccd-1164a699b407" containerName="dnsmasq-dns" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.633179 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682a2f1-a646-4f7e-9b03-578bbe315f48" containerName="mariadb-account-create-update" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.633830 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.635700 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.668483 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6cwjs"] Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.750326 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48tgb\" (UniqueName: \"kubernetes.io/projected/de9788a4-d35f-4b1a-a097-7392bcc1e091-kube-api-access-48tgb\") pod \"root-account-create-update-6cwjs\" (UID: \"de9788a4-d35f-4b1a-a097-7392bcc1e091\") " pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.753941 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9788a4-d35f-4b1a-a097-7392bcc1e091-operator-scripts\") pod \"root-account-create-update-6cwjs\" (UID: \"de9788a4-d35f-4b1a-a097-7392bcc1e091\") " pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.780597 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.782659 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.784703 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5rjtk" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.785759 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.786327 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.786855 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.789786 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.856247 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53a84120-080a-41f4-a4de-e52521c976c8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.856321 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a84120-080a-41f4-a4de-e52521c976c8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.856355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53a84120-080a-41f4-a4de-e52521c976c8-config\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc 
kubenswrapper[4858]: I0218 00:52:10.856408 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53a84120-080a-41f4-a4de-e52521c976c8-scripts\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.856737 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl4nj\" (UniqueName: \"kubernetes.io/projected/53a84120-080a-41f4-a4de-e52521c976c8-kube-api-access-bl4nj\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.856848 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a84120-080a-41f4-a4de-e52521c976c8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.857084 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48tgb\" (UniqueName: \"kubernetes.io/projected/de9788a4-d35f-4b1a-a097-7392bcc1e091-kube-api-access-48tgb\") pod \"root-account-create-update-6cwjs\" (UID: \"de9788a4-d35f-4b1a-a097-7392bcc1e091\") " pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.857256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9788a4-d35f-4b1a-a097-7392bcc1e091-operator-scripts\") pod \"root-account-create-update-6cwjs\" (UID: \"de9788a4-d35f-4b1a-a097-7392bcc1e091\") " pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.857310 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a84120-080a-41f4-a4de-e52521c976c8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.859615 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9788a4-d35f-4b1a-a097-7392bcc1e091-operator-scripts\") pod \"root-account-create-update-6cwjs\" (UID: \"de9788a4-d35f-4b1a-a097-7392bcc1e091\") " pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.889625 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48tgb\" (UniqueName: \"kubernetes.io/projected/de9788a4-d35f-4b1a-a097-7392bcc1e091-kube-api-access-48tgb\") pod \"root-account-create-update-6cwjs\" (UID: \"de9788a4-d35f-4b1a-a097-7392bcc1e091\") " pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.960442 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a84120-080a-41f4-a4de-e52521c976c8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.960530 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53a84120-080a-41f4-a4de-e52521c976c8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.960560 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a84120-080a-41f4-a4de-e52521c976c8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.960587 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53a84120-080a-41f4-a4de-e52521c976c8-config\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.960622 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53a84120-080a-41f4-a4de-e52521c976c8-scripts\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.960659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl4nj\" (UniqueName: \"kubernetes.io/projected/53a84120-080a-41f4-a4de-e52521c976c8-kube-api-access-bl4nj\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.960720 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a84120-080a-41f4-a4de-e52521c976c8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.964348 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a84120-080a-41f4-a4de-e52521c976c8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.965251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53a84120-080a-41f4-a4de-e52521c976c8-config\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.965958 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53a84120-080a-41f4-a4de-e52521c976c8-scripts\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.975730 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53a84120-080a-41f4-a4de-e52521c976c8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.976142 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.979112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a84120-080a-41f4-a4de-e52521c976c8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:10 crc kubenswrapper[4858]: I0218 00:52:10.979760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a84120-080a-41f4-a4de-e52521c976c8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:11 crc kubenswrapper[4858]: I0218 00:52:11.001006 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl4nj\" (UniqueName: \"kubernetes.io/projected/53a84120-080a-41f4-a4de-e52521c976c8-kube-api-access-bl4nj\") pod \"ovn-northd-0\" (UID: \"53a84120-080a-41f4-a4de-e52521c976c8\") " pod="openstack/ovn-northd-0" Feb 18 00:52:11 crc kubenswrapper[4858]: I0218 00:52:11.116304 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.123219 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.132708 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7pckt" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.155923 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.166782 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.186094 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.188924 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-operator-scripts\") pod \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\" (UID: \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.189020 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7czr\" (UniqueName: \"kubernetes.io/projected/2bdc4a39-ee6d-47eb-bb82-665a206a9690-kube-api-access-p7czr\") pod \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\" (UID: \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.189065 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bdc4a39-ee6d-47eb-bb82-665a206a9690-operator-scripts\") pod \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\" (UID: \"2bdc4a39-ee6d-47eb-bb82-665a206a9690\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.189134 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2bhj\" (UniqueName: \"kubernetes.io/projected/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-kube-api-access-x2bhj\") pod \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\" (UID: \"f78e85b8-d7a3-4b15-991b-6104ba1ffe95\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.190818 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f78e85b8-d7a3-4b15-991b-6104ba1ffe95" (UID: "f78e85b8-d7a3-4b15-991b-6104ba1ffe95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.191132 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bdc4a39-ee6d-47eb-bb82-665a206a9690-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2bdc4a39-ee6d-47eb-bb82-665a206a9690" (UID: "2bdc4a39-ee6d-47eb-bb82-665a206a9690"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.211693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bdc4a39-ee6d-47eb-bb82-665a206a9690-kube-api-access-p7czr" (OuterVolumeSpecName: "kube-api-access-p7czr") pod "2bdc4a39-ee6d-47eb-bb82-665a206a9690" (UID: "2bdc4a39-ee6d-47eb-bb82-665a206a9690"). InnerVolumeSpecName "kube-api-access-p7czr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.213745 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-kube-api-access-x2bhj" (OuterVolumeSpecName: "kube-api-access-x2bhj") pod "f78e85b8-d7a3-4b15-991b-6104ba1ffe95" (UID: "f78e85b8-d7a3-4b15-991b-6104ba1ffe95"). InnerVolumeSpecName "kube-api-access-x2bhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.290281 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/823f6441-ed95-4f51-82c1-b8063d153460-operator-scripts\") pod \"823f6441-ed95-4f51-82c1-b8063d153460\" (UID: \"823f6441-ed95-4f51-82c1-b8063d153460\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.290568 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2qmp\" (UniqueName: \"kubernetes.io/projected/823f6441-ed95-4f51-82c1-b8063d153460-kube-api-access-t2qmp\") pod \"823f6441-ed95-4f51-82c1-b8063d153460\" (UID: \"823f6441-ed95-4f51-82c1-b8063d153460\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.290672 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eba52de-f7c0-4843-941d-20a57d0e012b-operator-scripts\") pod \"8eba52de-f7c0-4843-941d-20a57d0e012b\" (UID: \"8eba52de-f7c0-4843-941d-20a57d0e012b\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.290695 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prrh7\" (UniqueName: \"kubernetes.io/projected/d4c577ca-985a-4041-a06e-f987c0cd3608-kube-api-access-prrh7\") pod \"d4c577ca-985a-4041-a06e-f987c0cd3608\" (UID: \"d4c577ca-985a-4041-a06e-f987c0cd3608\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.290825 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4c577ca-985a-4041-a06e-f987c0cd3608-operator-scripts\") pod \"d4c577ca-985a-4041-a06e-f987c0cd3608\" (UID: \"d4c577ca-985a-4041-a06e-f987c0cd3608\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.290873 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vj8q\" (UniqueName: \"kubernetes.io/projected/8eba52de-f7c0-4843-941d-20a57d0e012b-kube-api-access-4vj8q\") pod \"8eba52de-f7c0-4843-941d-20a57d0e012b\" (UID: \"8eba52de-f7c0-4843-941d-20a57d0e012b\") " Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.290876 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/823f6441-ed95-4f51-82c1-b8063d153460-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "823f6441-ed95-4f51-82c1-b8063d153460" (UID: "823f6441-ed95-4f51-82c1-b8063d153460"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.291046 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eba52de-f7c0-4843-941d-20a57d0e012b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8eba52de-f7c0-4843-941d-20a57d0e012b" (UID: "8eba52de-f7c0-4843-941d-20a57d0e012b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.291165 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c577ca-985a-4041-a06e-f987c0cd3608-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4c577ca-985a-4041-a06e-f987c0cd3608" (UID: "d4c577ca-985a-4041-a06e-f987c0cd3608"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.291642 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2bhj\" (UniqueName: \"kubernetes.io/projected/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-kube-api-access-x2bhj\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.291665 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4c577ca-985a-4041-a06e-f987c0cd3608-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.291675 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78e85b8-d7a3-4b15-991b-6104ba1ffe95-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.291683 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/823f6441-ed95-4f51-82c1-b8063d153460-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.291693 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7czr\" (UniqueName: \"kubernetes.io/projected/2bdc4a39-ee6d-47eb-bb82-665a206a9690-kube-api-access-p7czr\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.291702 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eba52de-f7c0-4843-941d-20a57d0e012b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.291711 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bdc4a39-ee6d-47eb-bb82-665a206a9690-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.295647 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eba52de-f7c0-4843-941d-20a57d0e012b-kube-api-access-4vj8q" (OuterVolumeSpecName: "kube-api-access-4vj8q") pod "8eba52de-f7c0-4843-941d-20a57d0e012b" (UID: "8eba52de-f7c0-4843-941d-20a57d0e012b"). InnerVolumeSpecName "kube-api-access-4vj8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.295716 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823f6441-ed95-4f51-82c1-b8063d153460-kube-api-access-t2qmp" (OuterVolumeSpecName: "kube-api-access-t2qmp") pod "823f6441-ed95-4f51-82c1-b8063d153460" (UID: "823f6441-ed95-4f51-82c1-b8063d153460"). InnerVolumeSpecName "kube-api-access-t2qmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.298243 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c577ca-985a-4041-a06e-f987c0cd3608-kube-api-access-prrh7" (OuterVolumeSpecName: "kube-api-access-prrh7") pod "d4c577ca-985a-4041-a06e-f987c0cd3608" (UID: "d4c577ca-985a-4041-a06e-f987c0cd3608"). InnerVolumeSpecName "kube-api-access-prrh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.393712 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vj8q\" (UniqueName: \"kubernetes.io/projected/8eba52de-f7c0-4843-941d-20a57d0e012b-kube-api-access-4vj8q\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.393763 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2qmp\" (UniqueName: \"kubernetes.io/projected/823f6441-ed95-4f51-82c1-b8063d153460-kube-api-access-t2qmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.393780 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prrh7\" (UniqueName: \"kubernetes.io/projected/d4c577ca-985a-4041-a06e-f987c0cd3608-kube-api-access-prrh7\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.472196 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6cwjs"] Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.579829 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.771909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3a23-account-create-update-tl98n" event={"ID":"d4c577ca-985a-4041-a06e-f987c0cd3608","Type":"ContainerDied","Data":"bbbae3e0555614556ce2f5586bedb4e44817b733fdc6ec18a123a75060ba9646"} Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.771935 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3a23-account-create-update-tl98n" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.771985 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbbae3e0555614556ce2f5586bedb4e44817b733fdc6ec18a123a75060ba9646" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.773551 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7tfnm" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.773545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7tfnm" event={"ID":"823f6441-ed95-4f51-82c1-b8063d153460","Type":"ContainerDied","Data":"8766c5707dfa2795cd1a8714bf9499de8e00c8e2d085bc18132bcfa65245dba1"} Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.773700 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8766c5707dfa2795cd1a8714bf9499de8e00c8e2d085bc18132bcfa65245dba1" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.775154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5l2bs" event={"ID":"8eba52de-f7c0-4843-941d-20a57d0e012b","Type":"ContainerDied","Data":"50837ab7e084f30fe7199f4c877becfd814c22e5b1a42ed6d6d9637317e822df"} Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.775206 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50837ab7e084f30fe7199f4c877becfd814c22e5b1a42ed6d6d9637317e822df" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.775171 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5l2bs" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.777729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"53a84120-080a-41f4-a4de-e52521c976c8","Type":"ContainerStarted","Data":"9d5ae02e4c58e5e4b8dfab05d3b561bd2671a21ff8bb7a8cf972b1606b6e827a"} Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.779250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6cwjs" event={"ID":"de9788a4-d35f-4b1a-a097-7392bcc1e091","Type":"ContainerStarted","Data":"9ec0343b228ed3380bff6d29318d38d5afee51a72eb3aa222f2d7954b2bcdde0"} Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.779300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6cwjs" event={"ID":"de9788a4-d35f-4b1a-a097-7392bcc1e091","Type":"ContainerStarted","Data":"8b775b9100b49c96eb4a3f207a7c1471cfe1ef9d62df03c6736b28502c5d58c5"} Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.782258 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerStarted","Data":"cb9957228cdffa92328094e1aefbc5a143f684f8da6cebb43c8b318ccc79d4eb"} Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.783340 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2c0e-account-create-update-4kjnq" event={"ID":"2bdc4a39-ee6d-47eb-bb82-665a206a9690","Type":"ContainerDied","Data":"a2c7d76261072e233df6b005b6ebdfa9e27a5d6030e2f1803aafc952627db30b"} Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.783364 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c7d76261072e233df6b005b6ebdfa9e27a5d6030e2f1803aafc952627db30b" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.783407 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2c0e-account-create-update-4kjnq" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.784390 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7pckt" event={"ID":"f78e85b8-d7a3-4b15-991b-6104ba1ffe95","Type":"ContainerDied","Data":"3bc105847d410e72bea06bb83b64a3d493cf092c72514d82d37f4a44efcfc7c4"} Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.784410 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc105847d410e72bea06bb83b64a3d493cf092c72514d82d37f4a44efcfc7c4" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.784446 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7pckt" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.805454 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-6cwjs" podStartSLOduration=2.805439753 podStartE2EDuration="2.805439753s" podCreationTimestamp="2026-02-18 00:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:12.797424668 +0000 UTC m=+1086.103261400" watchObservedRunningTime="2026-02-18 00:52:12.805439753 +0000 UTC m=+1086.111276485" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.829419 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=12.67177648 podStartE2EDuration="56.829401358s" podCreationTimestamp="2026-02-18 00:51:16 +0000 UTC" firstStartedPulling="2026-02-18 00:51:27.825944254 +0000 UTC m=+1041.131780986" lastFinishedPulling="2026-02-18 00:52:11.983569142 +0000 UTC m=+1085.289405864" observedRunningTime="2026-02-18 00:52:12.822489919 +0000 UTC m=+1086.128326661" watchObservedRunningTime="2026-02-18 00:52:12.829401358 +0000 UTC m=+1086.135238100" Feb 18 00:52:12 crc kubenswrapper[4858]: I0218 00:52:12.832048 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:13 crc kubenswrapper[4858]: I0218 00:52:13.219515 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:52:13 crc kubenswrapper[4858]: E0218 00:52:13.220024 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 00:52:13 crc kubenswrapper[4858]: E0218 00:52:13.220038 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 00:52:13 crc kubenswrapper[4858]: E0218 00:52:13.220086 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift podName:d0600ce0-ec0e-48b8-b22e-7f94ffd40c07 nodeName:}" failed. No retries permitted until 2026-02-18 00:52:29.220069659 +0000 UTC m=+1102.525906391 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift") pod "swift-storage-0" (UID: "d0600ce0-ec0e-48b8-b22e-7f94ffd40c07") : configmap "swift-ring-files" not found Feb 18 00:52:13 crc kubenswrapper[4858]: I0218 00:52:13.800545 4858 generic.go:334] "Generic (PLEG): container finished" podID="de9788a4-d35f-4b1a-a097-7392bcc1e091" containerID="9ec0343b228ed3380bff6d29318d38d5afee51a72eb3aa222f2d7954b2bcdde0" exitCode=0 Feb 18 00:52:13 crc kubenswrapper[4858]: I0218 00:52:13.801151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6cwjs" event={"ID":"de9788a4-d35f-4b1a-a097-7392bcc1e091","Type":"ContainerDied","Data":"9ec0343b228ed3380bff6d29318d38d5afee51a72eb3aa222f2d7954b2bcdde0"} Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.640008 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gjnp4"] Feb 18 00:52:14 crc kubenswrapper[4858]: E0218 00:52:14.641096 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78e85b8-d7a3-4b15-991b-6104ba1ffe95" containerName="mariadb-database-create" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.641110 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78e85b8-d7a3-4b15-991b-6104ba1ffe95" containerName="mariadb-database-create" Feb 18 00:52:14 crc kubenswrapper[4858]: E0218 00:52:14.641118 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eba52de-f7c0-4843-941d-20a57d0e012b" containerName="mariadb-database-create" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.641123 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eba52de-f7c0-4843-941d-20a57d0e012b" containerName="mariadb-database-create" Feb 18 00:52:14 crc kubenswrapper[4858]: E0218 00:52:14.641136 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="823f6441-ed95-4f51-82c1-b8063d153460" containerName="mariadb-database-create" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.641144 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="823f6441-ed95-4f51-82c1-b8063d153460" containerName="mariadb-database-create" Feb 18 00:52:14 crc kubenswrapper[4858]: E0218 00:52:14.641157 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c577ca-985a-4041-a06e-f987c0cd3608" containerName="mariadb-account-create-update" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.641163 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c577ca-985a-4041-a06e-f987c0cd3608" containerName="mariadb-account-create-update" Feb 18 00:52:14 crc kubenswrapper[4858]: E0218 00:52:14.641187 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bdc4a39-ee6d-47eb-bb82-665a206a9690" containerName="mariadb-account-create-update" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.641193 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bdc4a39-ee6d-47eb-bb82-665a206a9690" containerName="mariadb-account-create-update" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.641350 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bdc4a39-ee6d-47eb-bb82-665a206a9690" containerName="mariadb-account-create-update" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.641361 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eba52de-f7c0-4843-941d-20a57d0e012b" containerName="mariadb-database-create" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 
00:52:14.641378 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="823f6441-ed95-4f51-82c1-b8063d153460" containerName="mariadb-database-create" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.641386 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78e85b8-d7a3-4b15-991b-6104ba1ffe95" containerName="mariadb-database-create" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.641396 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c577ca-985a-4041-a06e-f987c0cd3608" containerName="mariadb-account-create-update" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.642004 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.645043 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.645614 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mlt6s" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.663744 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gjnp4"] Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.751605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-db-sync-config-data\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.751708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-combined-ca-bundle\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.751746 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkr8p\" (UniqueName: \"kubernetes.io/projected/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-kube-api-access-hkr8p\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.751895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-config-data\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.812020 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"53a84120-080a-41f4-a4de-e52521c976c8","Type":"ContainerStarted","Data":"ec05ffeef434fa9b5ddd8f3c474051098d688dd0bc60039963179412d6ad2d0e"} Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.812062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"53a84120-080a-41f4-a4de-e52521c976c8","Type":"ContainerStarted","Data":"c2949be0082b9982061a39b0088a479bd4556d65722fd2436958742c05412742"} Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.832650 4858 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.631346683 podStartE2EDuration="4.832625921s" podCreationTimestamp="2026-02-18 00:52:10 +0000 UTC" firstStartedPulling="2026-02-18 00:52:12.595686716 +0000 UTC m=+1085.901523448" lastFinishedPulling="2026-02-18 00:52:13.796965954 +0000 UTC m=+1087.102802686" observedRunningTime="2026-02-18 00:52:14.828590513 +0000 UTC m=+1088.134427245" watchObservedRunningTime="2026-02-18 00:52:14.832625921 +0000 UTC m=+1088.138462663" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.853808 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-config-data\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.853926 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-db-sync-config-data\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.854162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-combined-ca-bundle\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.854213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkr8p\" (UniqueName: \"kubernetes.io/projected/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-kube-api-access-hkr8p\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.860647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-config-data\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.867450 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-combined-ca-bundle\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.877673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkr8p\" (UniqueName: \"kubernetes.io/projected/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-kube-api-access-hkr8p\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.880174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-db-sync-config-data\") pod \"glance-db-sync-gjnp4\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:14 crc kubenswrapper[4858]: I0218 00:52:14.974113 
4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.195536 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.267035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48tgb\" (UniqueName: \"kubernetes.io/projected/de9788a4-d35f-4b1a-a097-7392bcc1e091-kube-api-access-48tgb\") pod \"de9788a4-d35f-4b1a-a097-7392bcc1e091\" (UID: \"de9788a4-d35f-4b1a-a097-7392bcc1e091\") " Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.267516 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9788a4-d35f-4b1a-a097-7392bcc1e091-operator-scripts\") pod \"de9788a4-d35f-4b1a-a097-7392bcc1e091\" (UID: \"de9788a4-d35f-4b1a-a097-7392bcc1e091\") " Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.268215 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9788a4-d35f-4b1a-a097-7392bcc1e091-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de9788a4-d35f-4b1a-a097-7392bcc1e091" (UID: "de9788a4-d35f-4b1a-a097-7392bcc1e091"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.273876 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9788a4-d35f-4b1a-a097-7392bcc1e091-kube-api-access-48tgb" (OuterVolumeSpecName: "kube-api-access-48tgb") pod "de9788a4-d35f-4b1a-a097-7392bcc1e091" (UID: "de9788a4-d35f-4b1a-a097-7392bcc1e091"). InnerVolumeSpecName "kube-api-access-48tgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.370892 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9788a4-d35f-4b1a-a097-7392bcc1e091-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.370923 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48tgb\" (UniqueName: \"kubernetes.io/projected/de9788a4-d35f-4b1a-a097-7392bcc1e091-kube-api-access-48tgb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.774600 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gjnp4"] Feb 18 00:52:15 crc kubenswrapper[4858]: W0218 00:52:15.775773 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80e88e1_21eb_46ff_9ee5_d22d3d589ecd.slice/crio-499ec87a115457c7b5b27b64e7dcd052f0b6e172d423820bcb2b6c026a6263e1 WatchSource:0}: Error finding container 499ec87a115457c7b5b27b64e7dcd052f0b6e172d423820bcb2b6c026a6263e1: Status 404 returned error can't find the container with id 499ec87a115457c7b5b27b64e7dcd052f0b6e172d423820bcb2b6c026a6263e1 Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.826545 4858 generic.go:334] "Generic (PLEG): container finished" podID="bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" containerID="0694bdf88dc812ac72063928621d9d239c2b065aa50791fee2c74216d89b38f0" exitCode=0 Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.826601 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gc9g7" event={"ID":"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2","Type":"ContainerDied","Data":"0694bdf88dc812ac72063928621d9d239c2b065aa50791fee2c74216d89b38f0"} Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.828024 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gjnp4" event={"ID":"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd","Type":"ContainerStarted","Data":"499ec87a115457c7b5b27b64e7dcd052f0b6e172d423820bcb2b6c026a6263e1"} Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.830674 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6cwjs" Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.834027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6cwjs" event={"ID":"de9788a4-d35f-4b1a-a097-7392bcc1e091","Type":"ContainerDied","Data":"8b775b9100b49c96eb4a3f207a7c1471cfe1ef9d62df03c6736b28502c5d58c5"} Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.834112 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b775b9100b49c96eb4a3f207a7c1471cfe1ef9d62df03c6736b28502c5d58c5" Feb 18 00:52:15 crc kubenswrapper[4858]: I0218 00:52:15.834151 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 00:52:16 crc kubenswrapper[4858]: I0218 00:52:16.297173 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 00:52:16 crc kubenswrapper[4858]: I0218 00:52:16.912735 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6mvr5" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.119144 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6cwjs"] Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.125475 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6cwjs"] Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.307392 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.411622 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-combined-ca-bundle\") pod \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.411712 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-ring-data-devices\") pod \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.411783 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-scripts\") pod \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.411883 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-swiftconf\") pod \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.411899 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-dispersionconf\") pod \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.411922 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-etc-swift\") pod \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.412312 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" (UID: "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.412623 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmfng\" (UniqueName: \"kubernetes.io/projected/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-kube-api-access-dmfng\") pod \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\" (UID: \"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2\") " Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.413223 4858 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.413737 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" (UID: "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.418761 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-kube-api-access-dmfng" (OuterVolumeSpecName: "kube-api-access-dmfng") pod "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" (UID: "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2"). InnerVolumeSpecName "kube-api-access-dmfng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.419089 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" (UID: "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.433852 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9788a4-d35f-4b1a-a097-7392bcc1e091" path="/var/lib/kubelet/pods/de9788a4-d35f-4b1a-a097-7392bcc1e091/volumes" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.436121 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-scripts" (OuterVolumeSpecName: "scripts") pod "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" (UID: "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.448406 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" (UID: "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.458360 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" (UID: "bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.514339 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.514364 4858 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.514375 4858 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.514385 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.514394 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmfng\" (UniqueName: \"kubernetes.io/projected/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-kube-api-access-dmfng\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.514402 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.832399 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.835304 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.852226 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gc9g7" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.852260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gc9g7" event={"ID":"bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2","Type":"ContainerDied","Data":"c01ca6a9d60597dd5ffcce117585d2807079e6616bf402b7e216c06b95d04eab"} Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.852286 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c01ca6a9d60597dd5ffcce117585d2807079e6616bf402b7e216c06b95d04eab" Feb 18 00:52:17 crc kubenswrapper[4858]: I0218 00:52:17.853538 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:18 crc kubenswrapper[4858]: I0218 00:52:18.259820 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c716bb3e-01b1-4bc7-a9a2-4604faf684f0" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 00:52:18 crc kubenswrapper[4858]: I0218 00:52:18.865133 4858 generic.go:334] "Generic (PLEG): container finished" podID="69e5abda-5efa-402f-b66c-320cf6ed1d99" containerID="40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84" exitCode=0 Feb 18 00:52:18 crc kubenswrapper[4858]: I0218 00:52:18.865292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"69e5abda-5efa-402f-b66c-320cf6ed1d99","Type":"ContainerDied","Data":"40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84"} Feb 18 00:52:19 crc kubenswrapper[4858]: I0218 00:52:19.875038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"69e5abda-5efa-402f-b66c-320cf6ed1d99","Type":"ContainerStarted","Data":"dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446"} Feb 18 00:52:19 crc kubenswrapper[4858]: I0218 00:52:19.875657 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:19 crc kubenswrapper[4858]: I0218 00:52:19.876833 4858 generic.go:334] "Generic (PLEG): container finished" podID="a53fffdd-3f92-4632-8391-cc89792884a8" containerID="d2851682fe6c25612d81583590c07588ee0a134c25c499865ea34610e1d5d805" exitCode=0 Feb 18 00:52:19 crc kubenswrapper[4858]: I0218 00:52:19.876888 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a53fffdd-3f92-4632-8391-cc89792884a8","Type":"ContainerDied","Data":"d2851682fe6c25612d81583590c07588ee0a134c25c499865ea34610e1d5d805"} Feb 18 00:52:19 crc kubenswrapper[4858]: I0218 00:52:19.900934 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.983253802 podStartE2EDuration="1m10.900915123s" podCreationTimestamp="2026-02-18 00:51:09 +0000 UTC" firstStartedPulling="2026-02-18 00:51:27.826059656 +0000 UTC m=+1041.131896388" lastFinishedPulling="2026-02-18 00:51:41.743720967 +0000 UTC m=+1055.049557709" observedRunningTime="2026-02-18 00:52:19.900537194 +0000 UTC m=+1093.206373936" watchObservedRunningTime="2026-02-18 00:52:19.900915123 +0000 UTC m=+1093.206751855" Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.300453 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.301098 4858 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="thanos-sidecar" containerID="cri-o://cb9957228cdffa92328094e1aefbc5a143f684f8da6cebb43c8b318ccc79d4eb" gracePeriod=600 Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.301089 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="config-reloader" containerID="cri-o://94ef0f112afbbed93175a754c3c0323cc4aaef125fe8d7efcfedd05c0e736362" gracePeriod=600 Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.301315 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="prometheus" containerID="cri-o://75d839d7eadbb020fa34197877d9f2b23d2adc7b88c4576909e777858272040d" gracePeriod=600 Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.888209 4858 generic.go:334] "Generic (PLEG): container finished" podID="675206cd-1619-4598-81a9-96b66d09ce88" containerID="cb9957228cdffa92328094e1aefbc5a143f684f8da6cebb43c8b318ccc79d4eb" exitCode=0 Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.888237 4858 generic.go:334] "Generic (PLEG): container finished" podID="675206cd-1619-4598-81a9-96b66d09ce88" containerID="94ef0f112afbbed93175a754c3c0323cc4aaef125fe8d7efcfedd05c0e736362" exitCode=0 Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.888244 4858 generic.go:334] "Generic (PLEG): container finished" podID="675206cd-1619-4598-81a9-96b66d09ce88" containerID="75d839d7eadbb020fa34197877d9f2b23d2adc7b88c4576909e777858272040d" exitCode=0 Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.888276 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerDied","Data":"cb9957228cdffa92328094e1aefbc5a143f684f8da6cebb43c8b318ccc79d4eb"} Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.888316 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerDied","Data":"94ef0f112afbbed93175a754c3c0323cc4aaef125fe8d7efcfedd05c0e736362"} Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.888326 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerDied","Data":"75d839d7eadbb020fa34197877d9f2b23d2adc7b88c4576909e777858272040d"} Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.890191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a53fffdd-3f92-4632-8391-cc89792884a8","Type":"ContainerStarted","Data":"63a35b3de0e3525ea596ea6f96026f20572c707cc07407fb8bf5ffa177e1d463"} Feb 18 00:52:20 crc kubenswrapper[4858]: I0218 00:52:20.890803 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.078046 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.085037 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qn9qf" Feb 18 
00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.109409 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=57.769496107 podStartE2EDuration="1m13.109395486s" podCreationTimestamp="2026-02-18 00:51:08 +0000 UTC" firstStartedPulling="2026-02-18 00:51:27.798382041 +0000 UTC m=+1041.104218763" lastFinishedPulling="2026-02-18 00:51:43.13828141 +0000 UTC m=+1056.444118142" observedRunningTime="2026-02-18 00:52:20.91694006 +0000 UTC m=+1094.222776792" watchObservedRunningTime="2026-02-18 00:52:21.109395486 +0000 UTC m=+1094.415232218" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.334450 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.356156 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fvnsh-config-p84hr"] Feb 18 00:52:21 crc kubenswrapper[4858]: E0218 00:52:21.357638 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" containerName="swift-ring-rebalance" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357656 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" containerName="swift-ring-rebalance" Feb 18 00:52:21 crc kubenswrapper[4858]: E0218 00:52:21.357670 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="init-config-reloader" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357677 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="init-config-reloader" Feb 18 00:52:21 crc kubenswrapper[4858]: E0218 00:52:21.357685 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="config-reloader" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357692 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="config-reloader" Feb 18 00:52:21 crc kubenswrapper[4858]: E0218 00:52:21.357707 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="prometheus" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357713 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="prometheus" Feb 18 00:52:21 crc kubenswrapper[4858]: E0218 00:52:21.357725 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="thanos-sidecar" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357731 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="thanos-sidecar" Feb 18 00:52:21 crc kubenswrapper[4858]: E0218 00:52:21.357744 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9788a4-d35f-4b1a-a097-7392bcc1e091" containerName="mariadb-account-create-update" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357750 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9788a4-d35f-4b1a-a097-7392bcc1e091" containerName="mariadb-account-create-update" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357896 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="675206cd-1619-4598-81a9-96b66d09ce88" 
containerName="thanos-sidecar" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357907 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="prometheus" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357917 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9788a4-d35f-4b1a-a097-7392bcc1e091" containerName="mariadb-account-create-update" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357931 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="675206cd-1619-4598-81a9-96b66d09ce88" containerName="config-reloader" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.357938 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2" containerName="swift-ring-rebalance" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.358506 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.361056 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.369904 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fvnsh-config-p84hr"] Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502292 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/675206cd-1619-4598-81a9-96b66d09ce88-config-out\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502354 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-0\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502388 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-tls-assets\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502637 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-config\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502685 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-1\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" 
(UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502751 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-thanos-prometheus-http-client-file\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502780 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-2\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzdxn\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-kube-api-access-kzdxn\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.502846 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-web-config\") pod \"675206cd-1619-4598-81a9-96b66d09ce88\" (UID: \"675206cd-1619-4598-81a9-96b66d09ce88\") " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503095 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-scripts\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503168 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-log-ovn\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503203 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flsqs\" (UniqueName: \"kubernetes.io/projected/2a14b627-35dd-4a06-a030-a23d207650f7-kube-api-access-flsqs\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503224 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-additional-scripts\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: 
\"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503265 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503294 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503366 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run-ovn\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503549 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503564 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.503575 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/675206cd-1619-4598-81a9-96b66d09ce88-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.509761 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/675206cd-1619-4598-81a9-96b66d09ce88-config-out" (OuterVolumeSpecName: "config-out") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.511517 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-config" (OuterVolumeSpecName: "config") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.521410 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.521699 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-kube-api-access-kzdxn" (OuterVolumeSpecName: "kube-api-access-kzdxn") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "kube-api-access-kzdxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.526536 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "pvc-7f31e845-073a-4f8c-8018-2bfd1403618b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.528879 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.534611 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-web-config" (OuterVolumeSpecName: "web-config") pod "675206cd-1619-4598-81a9-96b66d09ce88" (UID: "675206cd-1619-4598-81a9-96b66d09ce88"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605368 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run-ovn\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605431 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-scripts\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605451 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605532 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-log-ovn\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605574 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flsqs\" (UniqueName: \"kubernetes.io/projected/2a14b627-35dd-4a06-a030-a23d207650f7-kube-api-access-flsqs\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605597 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-additional-scripts\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605698 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") on node \"crc\" " Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605711 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605708 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run-ovn\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605721 4858 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605793 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzdxn\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-kube-api-access-kzdxn\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605809 4858 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/675206cd-1619-4598-81a9-96b66d09ce88-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605825 4858 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/675206cd-1619-4598-81a9-96b66d09ce88-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.605834 4858 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/675206cd-1619-4598-81a9-96b66d09ce88-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.606141 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-log-ovn\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.606311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.606988 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-additional-scripts\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.607904 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-scripts\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.622360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flsqs\" (UniqueName: \"kubernetes.io/projected/2a14b627-35dd-4a06-a030-a23d207650f7-kube-api-access-flsqs\") pod \"ovn-controller-fvnsh-config-p84hr\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.630545 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.630741 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7f31e845-073a-4f8c-8018-2bfd1403618b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b") on node "crc" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.674380 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.707820 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.938326 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"675206cd-1619-4598-81a9-96b66d09ce88","Type":"ContainerDied","Data":"718a2b165d3925b61222f071f999911d0afe01da0733d8b305f2ebc444a64677"} Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.939753 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.940454 4858 scope.go:117] "RemoveContainer" containerID="cb9957228cdffa92328094e1aefbc5a143f684f8da6cebb43c8b318ccc79d4eb" Feb 18 00:52:21 crc kubenswrapper[4858]: I0218 00:52:21.993713 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.006051 4858 scope.go:117] "RemoveContainer" containerID="94ef0f112afbbed93175a754c3c0323cc4aaef125fe8d7efcfedd05c0e736362" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.008266 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.017176 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.019192 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.029744 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.029869 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.029937 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.030008 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.030122 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.030882 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.031212 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-txzkj" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.032622 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.035247 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.035518 4858 scope.go:117] "RemoveContainer" containerID="75d839d7eadbb020fa34197877d9f2b23d2adc7b88c4576909e777858272040d" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.060867 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.109181 4858 scope.go:117] "RemoveContainer" containerID="dacce4e7cee3cf6c79792f28005fad891b8439866e078de90ac1c26888a44874" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115550 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7e61127a-3243-441c-a9e5-8eafb19aeac5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115643 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-config\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: 
I0218 00:52:22.115706 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpjkt\" (UniqueName: \"kubernetes.io/projected/7e61127a-3243-441c-a9e5-8eafb19aeac5-kube-api-access-jpjkt\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115741 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7e61127a-3243-441c-a9e5-8eafb19aeac5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115764 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7e61127a-3243-441c-a9e5-8eafb19aeac5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115859 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7e61127a-3243-441c-a9e5-8eafb19aeac5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115888 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.115970 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.116005 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7e61127a-3243-441c-a9e5-8eafb19aeac5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.116044 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.134584 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fvnsh-config-p84hr"] Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.211071 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kf4zd"] Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.215049 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217408 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpjkt\" (UniqueName: \"kubernetes.io/projected/7e61127a-3243-441c-a9e5-8eafb19aeac5-kube-api-access-jpjkt\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7e61127a-3243-441c-a9e5-8eafb19aeac5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217485 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217517 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217561 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7e61127a-3243-441c-a9e5-8eafb19aeac5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217578 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7e61127a-3243-441c-a9e5-8eafb19aeac5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217650 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217670 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7e61127a-3243-441c-a9e5-8eafb19aeac5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217728 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217775 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7e61127a-3243-441c-a9e5-8eafb19aeac5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.217820 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-config\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.218426 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7e61127a-3243-441c-a9e5-8eafb19aeac5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.218558 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7e61127a-3243-441c-a9e5-8eafb19aeac5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.219658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7e61127a-3243-441c-a9e5-8eafb19aeac5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.222620 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7e61127a-3243-441c-a9e5-8eafb19aeac5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.223752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-config\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.224012 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kf4zd"] Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.233067 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.244102 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.244142 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/44e27a725395d9ee006c04409605ca05c99678e3b59bf9a205b87c710aedbc27/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.244271 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7e61127a-3243-441c-a9e5-8eafb19aeac5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.244549 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.245821 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.246642 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.251744 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7e61127a-3243-441c-a9e5-8eafb19aeac5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.259232 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpjkt\" (UniqueName: \"kubernetes.io/projected/7e61127a-3243-441c-a9e5-8eafb19aeac5-kube-api-access-jpjkt\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.292525 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f31e845-073a-4f8c-8018-2bfd1403618b\") pod \"prometheus-metric-storage-0\" (UID: \"7e61127a-3243-441c-a9e5-8eafb19aeac5\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 
00:52:22.319480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-operator-scripts\") pod \"root-account-create-update-kf4zd\" (UID: \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\") " pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.319552 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmhsg\" (UniqueName: \"kubernetes.io/projected/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-kube-api-access-vmhsg\") pod \"root-account-create-update-kf4zd\" (UID: \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\") " pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.350414 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.420763 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-operator-scripts\") pod \"root-account-create-update-kf4zd\" (UID: \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\") " pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.420821 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmhsg\" (UniqueName: \"kubernetes.io/projected/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-kube-api-access-vmhsg\") pod \"root-account-create-update-kf4zd\" (UID: \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\") " pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.421969 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-operator-scripts\") pod \"root-account-create-update-kf4zd\" (UID: \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\") " pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.441373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmhsg\" (UniqueName: \"kubernetes.io/projected/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-kube-api-access-vmhsg\") pod \"root-account-create-update-kf4zd\" (UID: \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\") " pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:22 crc kubenswrapper[4858]: I0218 00:52:22.545136 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:23 crc kubenswrapper[4858]: I0218 00:52:22.833717 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:52:23 crc kubenswrapper[4858]: W0218 00:52:22.840896 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e61127a_3243_441c_a9e5_8eafb19aeac5.slice/crio-14de6ca9fada95ba160620f669faf3f281a4a25e9b521d436ea5f849114a5413 WatchSource:0}: Error finding container 14de6ca9fada95ba160620f669faf3f281a4a25e9b521d436ea5f849114a5413: Status 404 returned error can't find the container with id 14de6ca9fada95ba160620f669faf3f281a4a25e9b521d436ea5f849114a5413 Feb 18 00:52:23 crc kubenswrapper[4858]: I0218 00:52:22.951437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7e61127a-3243-441c-a9e5-8eafb19aeac5","Type":"ContainerStarted","Data":"14de6ca9fada95ba160620f669faf3f281a4a25e9b521d436ea5f849114a5413"} Feb 18 00:52:23 crc kubenswrapper[4858]: I0218 00:52:22.953109 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a14b627-35dd-4a06-a030-a23d207650f7" containerID="460b60ca9ee5fc36532dc071dd525d56c92370185a619a80f8fa46d461065709" exitCode=0 Feb 18 00:52:23 crc kubenswrapper[4858]: I0218 00:52:22.953151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fvnsh-config-p84hr" event={"ID":"2a14b627-35dd-4a06-a030-a23d207650f7","Type":"ContainerDied","Data":"460b60ca9ee5fc36532dc071dd525d56c92370185a619a80f8fa46d461065709"} Feb 18 00:52:23 crc kubenswrapper[4858]: I0218 00:52:22.953207 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fvnsh-config-p84hr" event={"ID":"2a14b627-35dd-4a06-a030-a23d207650f7","Type":"ContainerStarted","Data":"c5c73af20830f8918e62f03bf352605fe2928fb2af95c7a125fd46f1ea4afb4a"} Feb 18 00:52:23 crc kubenswrapper[4858]: I0218 00:52:23.431242 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="675206cd-1619-4598-81a9-96b66d09ce88" path="/var/lib/kubelet/pods/675206cd-1619-4598-81a9-96b66d09ce88/volumes" Feb 18 00:52:23 crc kubenswrapper[4858]: I0218 00:52:23.790043 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kf4zd"] Feb 18 00:52:25 crc kubenswrapper[4858]: I0218 00:52:25.265338 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:52:25 crc kubenswrapper[4858]: I0218 00:52:25.265730 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:52:25 crc kubenswrapper[4858]: I0218 00:52:25.981780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7e61127a-3243-441c-a9e5-8eafb19aeac5","Type":"ContainerStarted","Data":"0e8b27b286e62012244879be64241ff0e92ef1b8b8bd07b703e6fc6575c3c817"} Feb 18 00:52:28 crc kubenswrapper[4858]: I0218 00:52:28.265765 4858 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c716bb3e-01b1-4bc7-a9a2-4604faf684f0" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 00:52:29 crc kubenswrapper[4858]: I0218 00:52:29.312623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:52:29 crc kubenswrapper[4858]: I0218 00:52:29.338934 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0600ce0-ec0e-48b8-b22e-7f94ffd40c07-etc-swift\") pod \"swift-storage-0\" (UID: \"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07\") " pod="openstack/swift-storage-0" Feb 18 00:52:29 crc kubenswrapper[4858]: I0218 00:52:29.474773 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.341721 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.638100 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:30 crc kubenswrapper[4858]: W0218 00:52:30.672631 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3ed1a5a_7601_4e7b_94bd_b882d46ddbc8.slice/crio-fd9b10d6c27c80b552067c3f4e8be0381a5b700b91261415395915a03fe12229 WatchSource:0}: Error finding container fd9b10d6c27c80b552067c3f4e8be0381a5b700b91261415395915a03fe12229: Status 404 returned error can't find the container with id fd9b10d6c27c80b552067c3f4e8be0381a5b700b91261415395915a03fe12229 Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.854995 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-sllmm"] Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.856868 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.870743 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.875924 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-sllmm"] Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.953971 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf515e7-1bb4-4a22-baf6-932d935e26d5-operator-scripts\") pod \"cloudkitty-db-create-sllmm\" (UID: \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\") " pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.954014 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmtn9\" (UniqueName: \"kubernetes.io/projected/1cf515e7-1bb4-4a22-baf6-932d935e26d5-kube-api-access-kmtn9\") pod \"cloudkitty-db-create-sllmm\" (UID: \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\") " pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.960527 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-f8p9c"] Feb 18 00:52:30 crc kubenswrapper[4858]: E0218 00:52:30.960895 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a14b627-35dd-4a06-a030-a23d207650f7" containerName="ovn-config" Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.960912 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a14b627-35dd-4a06-a030-a23d207650f7" containerName="ovn-config" Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.961064 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a14b627-35dd-4a06-a030-a23d207650f7" containerName="ovn-config" Feb 18 00:52:30 crc kubenswrapper[4858]: I0218 00:52:30.961842 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.057226 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-scripts\") pod \"2a14b627-35dd-4a06-a030-a23d207650f7\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.057505 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-additional-scripts\") pod \"2a14b627-35dd-4a06-a030-a23d207650f7\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.057672 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flsqs\" (UniqueName: \"kubernetes.io/projected/2a14b627-35dd-4a06-a030-a23d207650f7-kube-api-access-flsqs\") pod \"2a14b627-35dd-4a06-a030-a23d207650f7\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.057756 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-log-ovn\") pod \"2a14b627-35dd-4a06-a030-a23d207650f7\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.057786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run-ovn\") pod \"2a14b627-35dd-4a06-a030-a23d207650f7\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.057919 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run\") pod \"2a14b627-35dd-4a06-a030-a23d207650f7\" (UID: \"2a14b627-35dd-4a06-a030-a23d207650f7\") " Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.058559 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-operator-scripts\") pod \"cinder-db-create-f8p9c\" (UID: \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\") " pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.058562 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-scripts" (OuterVolumeSpecName: "scripts") pod "2a14b627-35dd-4a06-a030-a23d207650f7" (UID: "2a14b627-35dd-4a06-a030-a23d207650f7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.058678 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf515e7-1bb4-4a22-baf6-932d935e26d5-operator-scripts\") pod \"cloudkitty-db-create-sllmm\" (UID: \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\") " pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.058719 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlmb\" (UniqueName: \"kubernetes.io/projected/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-kube-api-access-6xlmb\") pod \"cinder-db-create-f8p9c\" (UID: \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\") " pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.058748 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmtn9\" (UniqueName: \"kubernetes.io/projected/1cf515e7-1bb4-4a22-baf6-932d935e26d5-kube-api-access-kmtn9\") pod \"cloudkitty-db-create-sllmm\" (UID: \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\") " pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.058807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2a14b627-35dd-4a06-a030-a23d207650f7" (UID: "2a14b627-35dd-4a06-a030-a23d207650f7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.058883 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run" (OuterVolumeSpecName: "var-run") pod "2a14b627-35dd-4a06-a030-a23d207650f7" (UID: "2a14b627-35dd-4a06-a030-a23d207650f7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.059165 4858 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.059181 4858 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.059192 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.059219 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2a14b627-35dd-4a06-a030-a23d207650f7" (UID: "2a14b627-35dd-4a06-a030-a23d207650f7"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.059458 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2a14b627-35dd-4a06-a030-a23d207650f7" (UID: "2a14b627-35dd-4a06-a030-a23d207650f7"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.059708 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf515e7-1bb4-4a22-baf6-932d935e26d5-operator-scripts\") pod \"cloudkitty-db-create-sllmm\" (UID: \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\") " pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.061715 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f8p9c"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.091089 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmtn9\" (UniqueName: \"kubernetes.io/projected/1cf515e7-1bb4-4a22-baf6-932d935e26d5-kube-api-access-kmtn9\") pod \"cloudkitty-db-create-sllmm\" (UID: \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\") " pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.104950 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fvnsh-config-p84hr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.105623 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c638-account-create-update-cw4s2"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.113799 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fvnsh-config-p84hr" event={"ID":"2a14b627-35dd-4a06-a030-a23d207650f7","Type":"ContainerDied","Data":"c5c73af20830f8918e62f03bf352605fe2928fb2af95c7a125fd46f1ea4afb4a"} Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.113842 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5c73af20830f8918e62f03bf352605fe2928fb2af95c7a125fd46f1ea4afb4a" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.113924 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.162632 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.163549 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a14b627-35dd-4a06-a030-a23d207650f7-kube-api-access-flsqs" (OuterVolumeSpecName: "kube-api-access-flsqs") pod "2a14b627-35dd-4a06-a030-a23d207650f7" (UID: "2a14b627-35dd-4a06-a030-a23d207650f7"). InnerVolumeSpecName "kube-api-access-flsqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.165505 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c638-account-create-update-cw4s2"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.168992 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlmb\" (UniqueName: \"kubernetes.io/projected/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-kube-api-access-6xlmb\") pod \"cinder-db-create-f8p9c\" (UID: \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\") " pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.169228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-operator-scripts\") pod \"cinder-db-create-f8p9c\" (UID: \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\") " pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.169311 4858 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a14b627-35dd-4a06-a030-a23d207650f7-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.169322 4858 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a14b627-35dd-4a06-a030-a23d207650f7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.169330 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flsqs\" (UniqueName: \"kubernetes.io/projected/2a14b627-35dd-4a06-a030-a23d207650f7-kube-api-access-flsqs\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.175029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-operator-scripts\") pod \"cinder-db-create-f8p9c\" (UID: \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\") " pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.188070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kf4zd" event={"ID":"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8","Type":"ContainerStarted","Data":"fd9b10d6c27c80b552067c3f4e8be0381a5b700b91261415395915a03fe12229"} Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.235323 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xlmb\" (UniqueName: \"kubernetes.io/projected/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-kube-api-access-6xlmb\") pod \"cinder-db-create-f8p9c\" (UID: \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\") " pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.277745 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcfae652-4782-4fce-85dd-1b25547d3189-operator-scripts\") pod \"cinder-c638-account-create-update-cw4s2\" (UID: \"bcfae652-4782-4fce-85dd-1b25547d3189\") " pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.277891 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtn57\" (UniqueName: 
\"kubernetes.io/projected/bcfae652-4782-4fce-85dd-1b25547d3189-kube-api-access-dtn57\") pod \"cinder-c638-account-create-update-cw4s2\" (UID: \"bcfae652-4782-4fce-85dd-1b25547d3189\") " pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.310931 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.318269 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bb8nr"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.319367 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.334265 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.334279 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.334503 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x4lrd" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.334768 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.343426 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bb8nr"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.361511 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.381027 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtn57\" (UniqueName: \"kubernetes.io/projected/bcfae652-4782-4fce-85dd-1b25547d3189-kube-api-access-dtn57\") pod \"cinder-c638-account-create-update-cw4s2\" (UID: \"bcfae652-4782-4fce-85dd-1b25547d3189\") " pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.381078 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcfae652-4782-4fce-85dd-1b25547d3189-operator-scripts\") pod \"cinder-c638-account-create-update-cw4s2\" (UID: \"bcfae652-4782-4fce-85dd-1b25547d3189\") " pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.381763 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcfae652-4782-4fce-85dd-1b25547d3189-operator-scripts\") pod \"cinder-c638-account-create-update-cw4s2\" (UID: \"bcfae652-4782-4fce-85dd-1b25547d3189\") " pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.445954 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-pk7r2"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.450281 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.450641 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.452327 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtn57\" (UniqueName: \"kubernetes.io/projected/bcfae652-4782-4fce-85dd-1b25547d3189-kube-api-access-dtn57\") pod \"cinder-c638-account-create-update-cw4s2\" (UID: \"bcfae652-4782-4fce-85dd-1b25547d3189\") " pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.459139 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3e8e-account-create-update-sxgbt"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.460595 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.462984 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.483585 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pk7r2"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.484747 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-config-data\") pod \"keystone-db-sync-bb8nr\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.484849 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89qxq\" (UniqueName: \"kubernetes.io/projected/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-kube-api-access-89qxq\") pod \"keystone-db-sync-bb8nr\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.484961 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-combined-ca-bundle\") pod \"keystone-db-sync-bb8nr\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.526395 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3e8e-account-create-update-sxgbt"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.560271 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-z5j29"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.561645 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.569871 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.573656 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z5j29"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.589567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831ee652-d7d7-4197-946d-ce5456dcc949-operator-scripts\") pod \"neutron-db-create-pk7r2\" (UID: \"831ee652-d7d7-4197-946d-ce5456dcc949\") " pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.589632 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-combined-ca-bundle\") pod \"keystone-db-sync-bb8nr\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.589693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-config-data\") pod \"keystone-db-sync-bb8nr\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.589724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac4a24af-9dad-4e95-a4c0-8296caee70ef-operator-scripts\") pod \"neutron-3e8e-account-create-update-sxgbt\" (UID: \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\") " pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.589749 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89qxq\" (UniqueName: \"kubernetes.io/projected/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-kube-api-access-89qxq\") pod \"keystone-db-sync-bb8nr\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.589772 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw94m\" (UniqueName: \"kubernetes.io/projected/ac4a24af-9dad-4e95-a4c0-8296caee70ef-kube-api-access-tw94m\") pod \"neutron-3e8e-account-create-update-sxgbt\" (UID: \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\") " pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.589797 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq2wk\" (UniqueName: \"kubernetes.io/projected/831ee652-d7d7-4197-946d-ce5456dcc949-kube-api-access-xq2wk\") pod \"neutron-db-create-pk7r2\" (UID: \"831ee652-d7d7-4197-946d-ce5456dcc949\") " pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.596564 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-combined-ca-bundle\") pod \"keystone-db-sync-bb8nr\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.598113 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-config-data\") pod \"keystone-db-sync-bb8nr\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.621102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89qxq\" (UniqueName: \"kubernetes.io/projected/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-kube-api-access-89qxq\") pod \"keystone-db-sync-bb8nr\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.628545 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-dbbf-account-create-update-bvhsg"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.629779 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.632081 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.671557 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-dbbf-account-create-update-bvhsg"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.688725 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.692179 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baf554d2-2987-45e7-9676-2139110e2781-operator-scripts\") pod \"barbican-db-create-z5j29\" (UID: \"baf554d2-2987-45e7-9676-2139110e2781\") " pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.692230 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831ee652-d7d7-4197-946d-ce5456dcc949-operator-scripts\") pod \"neutron-db-create-pk7r2\" (UID: \"831ee652-d7d7-4197-946d-ce5456dcc949\") " pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.692295 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac4a24af-9dad-4e95-a4c0-8296caee70ef-operator-scripts\") pod \"neutron-3e8e-account-create-update-sxgbt\" (UID: \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\") " pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.692313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wsz\" (UniqueName: \"kubernetes.io/projected/baf554d2-2987-45e7-9676-2139110e2781-kube-api-access-z4wsz\") pod \"barbican-db-create-z5j29\" (UID: \"baf554d2-2987-45e7-9676-2139110e2781\") " pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.692343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw94m\" (UniqueName: \"kubernetes.io/projected/ac4a24af-9dad-4e95-a4c0-8296caee70ef-kube-api-access-tw94m\") pod \"neutron-3e8e-account-create-update-sxgbt\" (UID: \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\") " 
pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.692368 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq2wk\" (UniqueName: \"kubernetes.io/projected/831ee652-d7d7-4197-946d-ce5456dcc949-kube-api-access-xq2wk\") pod \"neutron-db-create-pk7r2\" (UID: \"831ee652-d7d7-4197-946d-ce5456dcc949\") " pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.693572 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831ee652-d7d7-4197-946d-ce5456dcc949-operator-scripts\") pod \"neutron-db-create-pk7r2\" (UID: \"831ee652-d7d7-4197-946d-ce5456dcc949\") " pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.694046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac4a24af-9dad-4e95-a4c0-8296caee70ef-operator-scripts\") pod \"neutron-3e8e-account-create-update-sxgbt\" (UID: \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\") " pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.720334 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw94m\" (UniqueName: \"kubernetes.io/projected/ac4a24af-9dad-4e95-a4c0-8296caee70ef-kube-api-access-tw94m\") pod \"neutron-3e8e-account-create-update-sxgbt\" (UID: \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\") " pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.720628 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0196-account-create-update-6d2sl"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.724213 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.724667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq2wk\" (UniqueName: \"kubernetes.io/projected/831ee652-d7d7-4197-946d-ce5456dcc949-kube-api-access-xq2wk\") pod \"neutron-db-create-pk7r2\" (UID: \"831ee652-d7d7-4197-946d-ce5456dcc949\") " pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.726749 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.744690 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0196-account-create-update-6d2sl"] Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.794216 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xdx\" (UniqueName: \"kubernetes.io/projected/d579ea77-2807-419f-b4f4-558b7cc1a09b-kube-api-access-x2xdx\") pod \"barbican-0196-account-create-update-6d2sl\" (UID: \"d579ea77-2807-419f-b4f4-558b7cc1a09b\") " pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.794293 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d579ea77-2807-419f-b4f4-558b7cc1a09b-operator-scripts\") pod \"barbican-0196-account-create-update-6d2sl\" (UID: \"d579ea77-2807-419f-b4f4-558b7cc1a09b\") " pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.794336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-operator-scripts\") pod \"cloudkitty-dbbf-account-create-update-bvhsg\" (UID: \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\") " pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.794359 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baf554d2-2987-45e7-9676-2139110e2781-operator-scripts\") pod \"barbican-db-create-z5j29\" (UID: \"baf554d2-2987-45e7-9676-2139110e2781\") " pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.794388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65dv\" (UniqueName: \"kubernetes.io/projected/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-kube-api-access-x65dv\") pod \"cloudkitty-dbbf-account-create-update-bvhsg\" (UID: \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\") " pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.794458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4wsz\" (UniqueName: \"kubernetes.io/projected/baf554d2-2987-45e7-9676-2139110e2781-kube-api-access-z4wsz\") pod \"barbican-db-create-z5j29\" (UID: \"baf554d2-2987-45e7-9676-2139110e2781\") " pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.795346 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/baf554d2-2987-45e7-9676-2139110e2781-operator-scripts\") pod \"barbican-db-create-z5j29\" (UID: \"baf554d2-2987-45e7-9676-2139110e2781\") " pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.813041 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.828585 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wsz\" (UniqueName: \"kubernetes.io/projected/baf554d2-2987-45e7-9676-2139110e2781-kube-api-access-z4wsz\") pod \"barbican-db-create-z5j29\" (UID: \"baf554d2-2987-45e7-9676-2139110e2781\") " pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.849285 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.896571 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.897132 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2xdx\" (UniqueName: \"kubernetes.io/projected/d579ea77-2807-419f-b4f4-558b7cc1a09b-kube-api-access-x2xdx\") pod \"barbican-0196-account-create-update-6d2sl\" (UID: \"d579ea77-2807-419f-b4f4-558b7cc1a09b\") " pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.897197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d579ea77-2807-419f-b4f4-558b7cc1a09b-operator-scripts\") pod \"barbican-0196-account-create-update-6d2sl\" (UID: \"d579ea77-2807-419f-b4f4-558b7cc1a09b\") " pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.897239 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-operator-scripts\") pod \"cloudkitty-dbbf-account-create-update-bvhsg\" (UID: \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\") " pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.897273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x65dv\" (UniqueName: \"kubernetes.io/projected/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-kube-api-access-x65dv\") pod \"cloudkitty-dbbf-account-create-update-bvhsg\" (UID: \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\") " pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.898218 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d579ea77-2807-419f-b4f4-558b7cc1a09b-operator-scripts\") pod \"barbican-0196-account-create-update-6d2sl\" (UID: \"d579ea77-2807-419f-b4f4-558b7cc1a09b\") " pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.898458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-operator-scripts\") pod 
\"cloudkitty-dbbf-account-create-update-bvhsg\" (UID: \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\") " pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.917241 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2xdx\" (UniqueName: \"kubernetes.io/projected/d579ea77-2807-419f-b4f4-558b7cc1a09b-kube-api-access-x2xdx\") pod \"barbican-0196-account-create-update-6d2sl\" (UID: \"d579ea77-2807-419f-b4f4-558b7cc1a09b\") " pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.917276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65dv\" (UniqueName: \"kubernetes.io/projected/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-kube-api-access-x65dv\") pod \"cloudkitty-dbbf-account-create-update-bvhsg\" (UID: \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\") " pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:31 crc kubenswrapper[4858]: I0218 00:52:31.969655 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.052937 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.097960 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fvnsh-config-p84hr"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.105736 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fvnsh-config-p84hr"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.109968 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.115897 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-sllmm"] Feb 18 00:52:32 crc kubenswrapper[4858]: W0218 00:52:32.154332 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0600ce0_ec0e_48b8_b22e_7f94ffd40c07.slice/crio-828bb844e878b520c01faad033f494d85cb6f883932171fe1710bdd9cac7e910 WatchSource:0}: Error finding container 828bb844e878b520c01faad033f494d85cb6f883932171fe1710bdd9cac7e910: Status 404 returned error can't find the container with id 828bb844e878b520c01faad033f494d85cb6f883932171fe1710bdd9cac7e910 Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.222025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"828bb844e878b520c01faad033f494d85cb6f883932171fe1710bdd9cac7e910"} Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.224112 4858 generic.go:334] "Generic (PLEG): container finished" podID="f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8" containerID="46ee678916ae87f94add437390249c8fb6de6e8496a5570841c4409cdaa3d8cf" exitCode=0 Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.224161 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kf4zd" event={"ID":"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8","Type":"ContainerDied","Data":"46ee678916ae87f94add437390249c8fb6de6e8496a5570841c4409cdaa3d8cf"} Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.228255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-sllmm" event={"ID":"1cf515e7-1bb4-4a22-baf6-932d935e26d5","Type":"ContainerStarted","Data":"b3c50140ccc37d7f5be7fc34218e0bb320dd4867427077892ae383c50924361f"} Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.229779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gjnp4" event={"ID":"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd","Type":"ContainerStarted","Data":"ce0c9cc1c5da391638527f8c880d54e1375c55a46e782617bf2c63d319921a9c"} Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.231910 4858 generic.go:334] "Generic (PLEG): container finished" podID="7e61127a-3243-441c-a9e5-8eafb19aeac5" containerID="0e8b27b286e62012244879be64241ff0e92ef1b8b8bd07b703e6fc6575c3c817" exitCode=0 Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.231940 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7e61127a-3243-441c-a9e5-8eafb19aeac5","Type":"ContainerDied","Data":"0e8b27b286e62012244879be64241ff0e92ef1b8b8bd07b703e6fc6575c3c817"} Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.375480 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gjnp4" podStartSLOduration=3.29680141 podStartE2EDuration="18.375461088s" podCreationTimestamp="2026-02-18 00:52:14 +0000 UTC" firstStartedPulling="2026-02-18 00:52:15.778262492 +0000 UTC m=+1089.084099224" lastFinishedPulling="2026-02-18 00:52:30.85692217 +0000 UTC m=+1104.162758902" observedRunningTime="2026-02-18 00:52:32.3136261 +0000 UTC m=+1105.619462832" watchObservedRunningTime="2026-02-18 00:52:32.375461088 +0000 UTC m=+1105.681297820" Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.433397 4858 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bb8nr"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.449103 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f8p9c"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.674826 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c638-account-create-update-cw4s2"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.819060 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pk7r2"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.832053 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3e8e-account-create-update-sxgbt"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.864095 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-dbbf-account-create-update-bvhsg"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.944953 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0196-account-create-update-6d2sl"] Feb 18 00:52:32 crc kubenswrapper[4858]: I0218 00:52:32.958834 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z5j29"] Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.245668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bb8nr" event={"ID":"03ab729d-962a-4c7b-8e72-ddf54dd2a69e","Type":"ContainerStarted","Data":"8b2bffb8a5957ea50280765a4ad6d9bd8cf8347fd9e7d73aeb25c58a67fee7fd"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.258761 4858 generic.go:334] "Generic (PLEG): container finished" podID="eb91d1f2-1b80-4082-a0a9-067ccadcc3a5" containerID="3a4127e7fda8b54bab13483b701290c9efc846b2f5e825bdfd95b8c32e5cb226" exitCode=0 Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.258847 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f8p9c" event={"ID":"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5","Type":"ContainerDied","Data":"3a4127e7fda8b54bab13483b701290c9efc846b2f5e825bdfd95b8c32e5cb226"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.258882 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f8p9c" event={"ID":"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5","Type":"ContainerStarted","Data":"86a45467c427802da5e438922b05e0d0a3e67c441a51187b879f11a95248806f"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.270341 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z5j29" event={"ID":"baf554d2-2987-45e7-9676-2139110e2781","Type":"ContainerStarted","Data":"61913bcb3e85eb95cea418f82559fcb765f2055aae35df84fd167dd8dd3ab619"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.270383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z5j29" event={"ID":"baf554d2-2987-45e7-9676-2139110e2781","Type":"ContainerStarted","Data":"818e66890b3d5805b26d7de8ef0e8bf2dcabf0553029ee96f929599a379acfd3"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.276696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c638-account-create-update-cw4s2" event={"ID":"bcfae652-4782-4fce-85dd-1b25547d3189","Type":"ContainerStarted","Data":"21bef429d3aa782c1b7fe218abf2149149b37bf3fc6ffc07bcd004f77161f0e9"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.276746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-c638-account-create-update-cw4s2" event={"ID":"bcfae652-4782-4fce-85dd-1b25547d3189","Type":"ContainerStarted","Data":"0ffb615a367a6955ee35513db78b82e6160da827122508068c3ad2cefdef6ef8"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.284728 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3e8e-account-create-update-sxgbt" event={"ID":"ac4a24af-9dad-4e95-a4c0-8296caee70ef","Type":"ContainerStarted","Data":"6bab1ac8463a6b3e79b00f515110e61c38d6f50857706ff48cded01d65614990"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.284773 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3e8e-account-create-update-sxgbt" event={"ID":"ac4a24af-9dad-4e95-a4c0-8296caee70ef","Type":"ContainerStarted","Data":"52abe323eb64e067e7c4454422338eb603e65b9c2f33182dcb700e6ecff2f77f"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.290124 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7e61127a-3243-441c-a9e5-8eafb19aeac5","Type":"ContainerStarted","Data":"cfd148fa9c0ea6783cf5575c337c52f343c4786f5e7349fee483ae5169a4c341"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.298318 4858 generic.go:334] "Generic (PLEG): container finished" podID="1cf515e7-1bb4-4a22-baf6-932d935e26d5" containerID="6623e91d53940bbbdcfb297b39182d5e4ab6fa33466f9b27fb4de23b85e0701b" exitCode=0 Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.298368 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-sllmm" event={"ID":"1cf515e7-1bb4-4a22-baf6-932d935e26d5","Type":"ContainerDied","Data":"6623e91d53940bbbdcfb297b39182d5e4ab6fa33466f9b27fb4de23b85e0701b"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.306524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0196-account-create-update-6d2sl" event={"ID":"d579ea77-2807-419f-b4f4-558b7cc1a09b","Type":"ContainerStarted","Data":"b06ad148cb879493ad1534c40ae9adb200c7deb3aee44152a1ffd73176d32c79"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.320723 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" event={"ID":"0ed5ad30-acfc-4cff-8dfb-a0eb62046780","Type":"ContainerStarted","Data":"847a2f6f6b35a87fc65170fce776576c1dd26bdacac74dae4ae71b681e3ce3a7"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.331363 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pk7r2" event={"ID":"831ee652-d7d7-4197-946d-ce5456dcc949","Type":"ContainerStarted","Data":"66e2a7143ec269b65af6a54c5d4fcc131603cd4f471e9e79348c093e1c017834"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.331400 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pk7r2" event={"ID":"831ee652-d7d7-4197-946d-ce5456dcc949","Type":"ContainerStarted","Data":"011b69bfcfc99b27f41fc7fe44d0b2c8b10fef4547823fe9f0b33bb8285512ca"} Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.335236 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c638-account-create-update-cw4s2" podStartSLOduration=2.33522375 podStartE2EDuration="2.33522375s" podCreationTimestamp="2026-02-18 00:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:33.324209033 +0000 UTC m=+1106.630045765" watchObservedRunningTime="2026-02-18 
00:52:33.33522375 +0000 UTC m=+1106.641060482" Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.391044 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-3e8e-account-create-update-sxgbt" podStartSLOduration=2.391025922 podStartE2EDuration="2.391025922s" podCreationTimestamp="2026-02-18 00:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:33.390965391 +0000 UTC m=+1106.696802113" watchObservedRunningTime="2026-02-18 00:52:33.391025922 +0000 UTC m=+1106.696862654" Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.393857 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-z5j29" podStartSLOduration=2.39385113 podStartE2EDuration="2.39385113s" podCreationTimestamp="2026-02-18 00:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:33.356854824 +0000 UTC m=+1106.662691556" watchObservedRunningTime="2026-02-18 00:52:33.39385113 +0000 UTC m=+1106.699687862" Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.429217 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" podStartSLOduration=2.429201447 podStartE2EDuration="2.429201447s" podCreationTimestamp="2026-02-18 00:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:33.428204962 +0000 UTC m=+1106.734041694" watchObservedRunningTime="2026-02-18 00:52:33.429201447 +0000 UTC m=+1106.735038179" Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.439719 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a14b627-35dd-4a06-a030-a23d207650f7" path="/var/lib/kubelet/pods/2a14b627-35dd-4a06-a030-a23d207650f7/volumes" Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.461578 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-pk7r2" podStartSLOduration=2.4615622999999998 podStartE2EDuration="2.4615623s" podCreationTimestamp="2026-02-18 00:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:33.459714076 +0000 UTC m=+1106.765550808" watchObservedRunningTime="2026-02-18 00:52:33.4615623 +0000 UTC m=+1106.767399032" Feb 18 00:52:33 crc kubenswrapper[4858]: I0218 00:52:33.916853 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.087129 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmhsg\" (UniqueName: \"kubernetes.io/projected/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-kube-api-access-vmhsg\") pod \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\" (UID: \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\") " Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.087596 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-operator-scripts\") pod \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\" (UID: \"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8\") " Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.088029 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8" (UID: "f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.088530 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.199826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-kube-api-access-vmhsg" (OuterVolumeSpecName: "kube-api-access-vmhsg") pod "f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8" (UID: "f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8"). InnerVolumeSpecName "kube-api-access-vmhsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:34 crc kubenswrapper[4858]: E0218 00:52:34.270681 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ed5ad30_acfc_4cff_8dfb_a0eb62046780.slice/crio-conmon-e8105054f4e99b0795a9cfd27d2524ebd13021b21d5843c6a1508d6d30ff6e06.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac4a24af_9dad_4e95_a4c0_8296caee70ef.slice/crio-conmon-6bab1ac8463a6b3e79b00f515110e61c38d6f50857706ff48cded01d65614990.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ed5ad30_acfc_4cff_8dfb_a0eb62046780.slice/crio-e8105054f4e99b0795a9cfd27d2524ebd13021b21d5843c6a1508d6d30ff6e06.scope\": RecentStats: unable to find data in memory cache]" Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.291698 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmhsg\" (UniqueName: \"kubernetes.io/projected/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8-kube-api-access-vmhsg\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.341585 4858 generic.go:334] "Generic (PLEG): container finished" podID="bcfae652-4782-4fce-85dd-1b25547d3189" containerID="21bef429d3aa782c1b7fe218abf2149149b37bf3fc6ffc07bcd004f77161f0e9" exitCode=0 Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.341658 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c638-account-create-update-cw4s2" event={"ID":"bcfae652-4782-4fce-85dd-1b25547d3189","Type":"ContainerDied","Data":"21bef429d3aa782c1b7fe218abf2149149b37bf3fc6ffc07bcd004f77161f0e9"} Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.343594 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac4a24af-9dad-4e95-a4c0-8296caee70ef" containerID="6bab1ac8463a6b3e79b00f515110e61c38d6f50857706ff48cded01d65614990" exitCode=0 Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.343633 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3e8e-account-create-update-sxgbt" event={"ID":"ac4a24af-9dad-4e95-a4c0-8296caee70ef","Type":"ContainerDied","Data":"6bab1ac8463a6b3e79b00f515110e61c38d6f50857706ff48cded01d65614990"} Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.345952 4858 generic.go:334] "Generic (PLEG): container finished" podID="d579ea77-2807-419f-b4f4-558b7cc1a09b" containerID="3910f249d783b5f7cb82afdfd5bf2e171dd09be9d92a70591b56a3f8577cea07" exitCode=0 Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.346002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0196-account-create-update-6d2sl" event={"ID":"d579ea77-2807-419f-b4f4-558b7cc1a09b","Type":"ContainerDied","Data":"3910f249d783b5f7cb82afdfd5bf2e171dd09be9d92a70591b56a3f8577cea07"} Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.347594 4858 generic.go:334] "Generic (PLEG): container finished" podID="0ed5ad30-acfc-4cff-8dfb-a0eb62046780" containerID="e8105054f4e99b0795a9cfd27d2524ebd13021b21d5843c6a1508d6d30ff6e06" exitCode=0 Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.347621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" 
event={"ID":"0ed5ad30-acfc-4cff-8dfb-a0eb62046780","Type":"ContainerDied","Data":"e8105054f4e99b0795a9cfd27d2524ebd13021b21d5843c6a1508d6d30ff6e06"} Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.348997 4858 generic.go:334] "Generic (PLEG): container finished" podID="831ee652-d7d7-4197-946d-ce5456dcc949" containerID="66e2a7143ec269b65af6a54c5d4fcc131603cd4f471e9e79348c093e1c017834" exitCode=0 Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.349051 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pk7r2" event={"ID":"831ee652-d7d7-4197-946d-ce5456dcc949","Type":"ContainerDied","Data":"66e2a7143ec269b65af6a54c5d4fcc131603cd4f471e9e79348c093e1c017834"} Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.351413 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kf4zd" event={"ID":"f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8","Type":"ContainerDied","Data":"fd9b10d6c27c80b552067c3f4e8be0381a5b700b91261415395915a03fe12229"} Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.351422 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kf4zd" Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.351432 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd9b10d6c27c80b552067c3f4e8be0381a5b700b91261415395915a03fe12229" Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.363067 4858 generic.go:334] "Generic (PLEG): container finished" podID="baf554d2-2987-45e7-9676-2139110e2781" containerID="61913bcb3e85eb95cea418f82559fcb765f2055aae35df84fd167dd8dd3ab619" exitCode=0 Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.363254 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z5j29" event={"ID":"baf554d2-2987-45e7-9676-2139110e2781","Type":"ContainerDied","Data":"61913bcb3e85eb95cea418f82559fcb765f2055aae35df84fd167dd8dd3ab619"} Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.845763 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:34 crc kubenswrapper[4858]: I0218 00:52:34.875858 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.002073 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-operator-scripts\") pod \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\" (UID: \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\") " Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.002358 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmtn9\" (UniqueName: \"kubernetes.io/projected/1cf515e7-1bb4-4a22-baf6-932d935e26d5-kube-api-access-kmtn9\") pod \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\" (UID: \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\") " Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.002488 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xlmb\" (UniqueName: \"kubernetes.io/projected/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-kube-api-access-6xlmb\") pod \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\" (UID: \"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5\") " Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.002576 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf515e7-1bb4-4a22-baf6-932d935e26d5-operator-scripts\") pod \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\" (UID: \"1cf515e7-1bb4-4a22-baf6-932d935e26d5\") " Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.002909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb91d1f2-1b80-4082-a0a9-067ccadcc3a5" (UID: "eb91d1f2-1b80-4082-a0a9-067ccadcc3a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.004210 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf515e7-1bb4-4a22-baf6-932d935e26d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1cf515e7-1bb4-4a22-baf6-932d935e26d5" (UID: "1cf515e7-1bb4-4a22-baf6-932d935e26d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.006109 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-kube-api-access-6xlmb" (OuterVolumeSpecName: "kube-api-access-6xlmb") pod "eb91d1f2-1b80-4082-a0a9-067ccadcc3a5" (UID: "eb91d1f2-1b80-4082-a0a9-067ccadcc3a5"). InnerVolumeSpecName "kube-api-access-6xlmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.007010 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf515e7-1bb4-4a22-baf6-932d935e26d5-kube-api-access-kmtn9" (OuterVolumeSpecName: "kube-api-access-kmtn9") pod "1cf515e7-1bb4-4a22-baf6-932d935e26d5" (UID: "1cf515e7-1bb4-4a22-baf6-932d935e26d5"). InnerVolumeSpecName "kube-api-access-kmtn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.104641 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xlmb\" (UniqueName: \"kubernetes.io/projected/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-kube-api-access-6xlmb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.104688 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf515e7-1bb4-4a22-baf6-932d935e26d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.104698 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.104707 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmtn9\" (UniqueName: \"kubernetes.io/projected/1cf515e7-1bb4-4a22-baf6-932d935e26d5-kube-api-access-kmtn9\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.377983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-sllmm" event={"ID":"1cf515e7-1bb4-4a22-baf6-932d935e26d5","Type":"ContainerDied","Data":"b3c50140ccc37d7f5be7fc34218e0bb320dd4867427077892ae383c50924361f"} Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.378032 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3c50140ccc37d7f5be7fc34218e0bb320dd4867427077892ae383c50924361f" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.377991 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-sllmm" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.382336 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7e61127a-3243-441c-a9e5-8eafb19aeac5","Type":"ContainerStarted","Data":"2eddff1da6e74c3555d79c07cd119458004bc6ccf28a82d94235640d92e14dff"} Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.382448 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7e61127a-3243-441c-a9e5-8eafb19aeac5","Type":"ContainerStarted","Data":"eabc09fb4c37d3fd7427c31c91c7a2d96fc80cc9d51b6ed5b5eef908ea87786f"} Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.387244 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"37c4df2d1aba6bc4dff0a0aca6fd79c4dd2fc01bbb9169f8dc2a9588c555260f"} Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.387289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"daac2fef801e77a04523ee392de1fbb975f103c560492fb928e74ba5b0739078"} Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.387299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"090fdc69f7270bbd899e68de6c9a50ef0637b9e454ba934717956dda8f4a8c88"} Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.388566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f8p9c" event={"ID":"eb91d1f2-1b80-4082-a0a9-067ccadcc3a5","Type":"ContainerDied","Data":"86a45467c427802da5e438922b05e0d0a3e67c441a51187b879f11a95248806f"} Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.388585 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86a45467c427802da5e438922b05e0d0a3e67c441a51187b879f11a95248806f" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.389865 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-f8p9c" Feb 18 00:52:35 crc kubenswrapper[4858]: I0218 00:52:35.445614 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.445594996 podStartE2EDuration="14.445594996s" podCreationTimestamp="2026-02-18 00:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:35.415139218 +0000 UTC m=+1108.720975950" watchObservedRunningTime="2026-02-18 00:52:35.445594996 +0000 UTC m=+1108.751431728" Feb 18 00:52:36 crc kubenswrapper[4858]: I0218 00:52:36.074396 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-fvnsh" Feb 18 00:52:36 crc kubenswrapper[4858]: I0218 00:52:36.400972 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"7589777e67a8f769474bf34db533e638ccbbc226e1afca8f447aecdf44cc2d41"} Feb 18 00:52:37 crc kubenswrapper[4858]: I0218 00:52:37.351364 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:37 crc kubenswrapper[4858]: I0218 00:52:37.351726 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:37 crc kubenswrapper[4858]: I0218 00:52:37.357857 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:37 crc kubenswrapper[4858]: I0218 00:52:37.412832 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 00:52:38 crc kubenswrapper[4858]: I0218 00:52:38.262105 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.006590 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.012115 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.020136 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.072138 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.072612 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.079747 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110557 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtn57\" (UniqueName: \"kubernetes.io/projected/bcfae652-4782-4fce-85dd-1b25547d3189-kube-api-access-dtn57\") pod \"bcfae652-4782-4fce-85dd-1b25547d3189\" (UID: \"bcfae652-4782-4fce-85dd-1b25547d3189\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110598 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831ee652-d7d7-4197-946d-ce5456dcc949-operator-scripts\") pod \"831ee652-d7d7-4197-946d-ce5456dcc949\" (UID: \"831ee652-d7d7-4197-946d-ce5456dcc949\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110684 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baf554d2-2987-45e7-9676-2139110e2781-operator-scripts\") pod \"baf554d2-2987-45e7-9676-2139110e2781\" (UID: \"baf554d2-2987-45e7-9676-2139110e2781\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d579ea77-2807-419f-b4f4-558b7cc1a09b-operator-scripts\") pod \"d579ea77-2807-419f-b4f4-558b7cc1a09b\" (UID: \"d579ea77-2807-419f-b4f4-558b7cc1a09b\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110723 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2xdx\" (UniqueName: \"kubernetes.io/projected/d579ea77-2807-419f-b4f4-558b7cc1a09b-kube-api-access-x2xdx\") pod \"d579ea77-2807-419f-b4f4-558b7cc1a09b\" (UID: \"d579ea77-2807-419f-b4f4-558b7cc1a09b\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110775 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4wsz\" (UniqueName: \"kubernetes.io/projected/baf554d2-2987-45e7-9676-2139110e2781-kube-api-access-z4wsz\") pod \"baf554d2-2987-45e7-9676-2139110e2781\" (UID: \"baf554d2-2987-45e7-9676-2139110e2781\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110829 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-operator-scripts\") pod \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\" (UID: \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110890 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq2wk\" (UniqueName: \"kubernetes.io/projected/831ee652-d7d7-4197-946d-ce5456dcc949-kube-api-access-xq2wk\") pod \"831ee652-d7d7-4197-946d-ce5456dcc949\" (UID: \"831ee652-d7d7-4197-946d-ce5456dcc949\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110907 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x65dv\" (UniqueName: \"kubernetes.io/projected/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-kube-api-access-x65dv\") pod \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\" (UID: \"0ed5ad30-acfc-4cff-8dfb-a0eb62046780\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.110923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bcfae652-4782-4fce-85dd-1b25547d3189-operator-scripts\") pod \"bcfae652-4782-4fce-85dd-1b25547d3189\" (UID: \"bcfae652-4782-4fce-85dd-1b25547d3189\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.111920 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcfae652-4782-4fce-85dd-1b25547d3189-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcfae652-4782-4fce-85dd-1b25547d3189" (UID: "bcfae652-4782-4fce-85dd-1b25547d3189"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.112825 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/831ee652-d7d7-4197-946d-ce5456dcc949-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "831ee652-d7d7-4197-946d-ce5456dcc949" (UID: "831ee652-d7d7-4197-946d-ce5456dcc949"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.112839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ed5ad30-acfc-4cff-8dfb-a0eb62046780" (UID: "0ed5ad30-acfc-4cff-8dfb-a0eb62046780"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.113188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baf554d2-2987-45e7-9676-2139110e2781-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "baf554d2-2987-45e7-9676-2139110e2781" (UID: "baf554d2-2987-45e7-9676-2139110e2781"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.113554 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d579ea77-2807-419f-b4f4-558b7cc1a09b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d579ea77-2807-419f-b4f4-558b7cc1a09b" (UID: "d579ea77-2807-419f-b4f4-558b7cc1a09b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.117773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf554d2-2987-45e7-9676-2139110e2781-kube-api-access-z4wsz" (OuterVolumeSpecName: "kube-api-access-z4wsz") pod "baf554d2-2987-45e7-9676-2139110e2781" (UID: "baf554d2-2987-45e7-9676-2139110e2781"). InnerVolumeSpecName "kube-api-access-z4wsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.120415 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d579ea77-2807-419f-b4f4-558b7cc1a09b-kube-api-access-x2xdx" (OuterVolumeSpecName: "kube-api-access-x2xdx") pod "d579ea77-2807-419f-b4f4-558b7cc1a09b" (UID: "d579ea77-2807-419f-b4f4-558b7cc1a09b"). InnerVolumeSpecName "kube-api-access-x2xdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.123307 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/831ee652-d7d7-4197-946d-ce5456dcc949-kube-api-access-xq2wk" (OuterVolumeSpecName: "kube-api-access-xq2wk") pod "831ee652-d7d7-4197-946d-ce5456dcc949" (UID: "831ee652-d7d7-4197-946d-ce5456dcc949"). InnerVolumeSpecName "kube-api-access-xq2wk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.123364 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-kube-api-access-x65dv" (OuterVolumeSpecName: "kube-api-access-x65dv") pod "0ed5ad30-acfc-4cff-8dfb-a0eb62046780" (UID: "0ed5ad30-acfc-4cff-8dfb-a0eb62046780"). InnerVolumeSpecName "kube-api-access-x65dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.147691 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcfae652-4782-4fce-85dd-1b25547d3189-kube-api-access-dtn57" (OuterVolumeSpecName: "kube-api-access-dtn57") pod "bcfae652-4782-4fce-85dd-1b25547d3189" (UID: "bcfae652-4782-4fce-85dd-1b25547d3189"). InnerVolumeSpecName "kube-api-access-dtn57". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212220 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac4a24af-9dad-4e95-a4c0-8296caee70ef-operator-scripts\") pod \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\" (UID: \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212305 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw94m\" (UniqueName: \"kubernetes.io/projected/ac4a24af-9dad-4e95-a4c0-8296caee70ef-kube-api-access-tw94m\") pod \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\" (UID: \"ac4a24af-9dad-4e95-a4c0-8296caee70ef\") " Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212708 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212723 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq2wk\" (UniqueName: \"kubernetes.io/projected/831ee652-d7d7-4197-946d-ce5456dcc949-kube-api-access-xq2wk\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212734 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcfae652-4782-4fce-85dd-1b25547d3189-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212742 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x65dv\" (UniqueName: \"kubernetes.io/projected/0ed5ad30-acfc-4cff-8dfb-a0eb62046780-kube-api-access-x65dv\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212750 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtn57\" (UniqueName: \"kubernetes.io/projected/bcfae652-4782-4fce-85dd-1b25547d3189-kube-api-access-dtn57\") on node \"crc\" DevicePath 
\"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212758 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831ee652-d7d7-4197-946d-ce5456dcc949-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212765 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baf554d2-2987-45e7-9676-2139110e2781-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212774 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d579ea77-2807-419f-b4f4-558b7cc1a09b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212782 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2xdx\" (UniqueName: \"kubernetes.io/projected/d579ea77-2807-419f-b4f4-558b7cc1a09b-kube-api-access-x2xdx\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212790 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4wsz\" (UniqueName: \"kubernetes.io/projected/baf554d2-2987-45e7-9676-2139110e2781-kube-api-access-z4wsz\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.212789 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac4a24af-9dad-4e95-a4c0-8296caee70ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac4a24af-9dad-4e95-a4c0-8296caee70ef" (UID: "ac4a24af-9dad-4e95-a4c0-8296caee70ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.215918 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4a24af-9dad-4e95-a4c0-8296caee70ef-kube-api-access-tw94m" (OuterVolumeSpecName: "kube-api-access-tw94m") pod "ac4a24af-9dad-4e95-a4c0-8296caee70ef" (UID: "ac4a24af-9dad-4e95-a4c0-8296caee70ef"). InnerVolumeSpecName "kube-api-access-tw94m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.314949 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac4a24af-9dad-4e95-a4c0-8296caee70ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.314989 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw94m\" (UniqueName: \"kubernetes.io/projected/ac4a24af-9dad-4e95-a4c0-8296caee70ef-kube-api-access-tw94m\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.442773 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pk7r2" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.444029 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z5j29" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.445156 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c638-account-create-update-cw4s2" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.447291 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3e8e-account-create-update-sxgbt" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.452825 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0196-account-create-update-6d2sl" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.457052 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bb8nr" podStartSLOduration=2.060619217 podStartE2EDuration="8.457031789s" podCreationTimestamp="2026-02-18 00:52:31 +0000 UTC" firstStartedPulling="2026-02-18 00:52:32.420278304 +0000 UTC m=+1105.726115036" lastFinishedPulling="2026-02-18 00:52:38.816690876 +0000 UTC m=+1112.122527608" observedRunningTime="2026-02-18 00:52:39.452623702 +0000 UTC m=+1112.758460444" watchObservedRunningTime="2026-02-18 00:52:39.457031789 +0000 UTC m=+1112.762868521" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.461123 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.463883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bb8nr" event={"ID":"03ab729d-962a-4c7b-8e72-ddf54dd2a69e","Type":"ContainerStarted","Data":"a7a11cfacf44a691b91f028c671ee7ec6da8c93f112d9cc143fe6c097cdc0c28"} Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.463919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pk7r2" event={"ID":"831ee652-d7d7-4197-946d-ce5456dcc949","Type":"ContainerDied","Data":"011b69bfcfc99b27f41fc7fe44d0b2c8b10fef4547823fe9f0b33bb8285512ca"} Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.463935 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="011b69bfcfc99b27f41fc7fe44d0b2c8b10fef4547823fe9f0b33bb8285512ca" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.463950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z5j29" event={"ID":"baf554d2-2987-45e7-9676-2139110e2781","Type":"ContainerDied","Data":"818e66890b3d5805b26d7de8ef0e8bf2dcabf0553029ee96f929599a379acfd3"} Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.463960 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818e66890b3d5805b26d7de8ef0e8bf2dcabf0553029ee96f929599a379acfd3" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.463967 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c638-account-create-update-cw4s2" event={"ID":"bcfae652-4782-4fce-85dd-1b25547d3189","Type":"ContainerDied","Data":"0ffb615a367a6955ee35513db78b82e6160da827122508068c3ad2cefdef6ef8"} Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.463976 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ffb615a367a6955ee35513db78b82e6160da827122508068c3ad2cefdef6ef8" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.463983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3e8e-account-create-update-sxgbt" event={"ID":"ac4a24af-9dad-4e95-a4c0-8296caee70ef","Type":"ContainerDied","Data":"52abe323eb64e067e7c4454422338eb603e65b9c2f33182dcb700e6ecff2f77f"} Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.463992 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52abe323eb64e067e7c4454422338eb603e65b9c2f33182dcb700e6ecff2f77f" 
Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.464000 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0196-account-create-update-6d2sl" event={"ID":"d579ea77-2807-419f-b4f4-558b7cc1a09b","Type":"ContainerDied","Data":"b06ad148cb879493ad1534c40ae9adb200c7deb3aee44152a1ffd73176d32c79"} Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.464009 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b06ad148cb879493ad1534c40ae9adb200c7deb3aee44152a1ffd73176d32c79" Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.464016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-dbbf-account-create-update-bvhsg" event={"ID":"0ed5ad30-acfc-4cff-8dfb-a0eb62046780","Type":"ContainerDied","Data":"847a2f6f6b35a87fc65170fce776576c1dd26bdacac74dae4ae71b681e3ce3a7"} Feb 18 00:52:39 crc kubenswrapper[4858]: I0218 00:52:39.464024 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="847a2f6f6b35a87fc65170fce776576c1dd26bdacac74dae4ae71b681e3ce3a7" Feb 18 00:52:40 crc kubenswrapper[4858]: I0218 00:52:40.473324 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"23968dbbd07bfda6ca8d74e2114188b1ae38f37e81ce575318cb9bf15575d46e"} Feb 18 00:52:40 crc kubenswrapper[4858]: I0218 00:52:40.474126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"927b21ca4837825cddf8f2c0f397783288a623ef447aed0f88d1b1ec8fa8df5b"} Feb 18 00:52:40 crc kubenswrapper[4858]: I0218 00:52:40.474155 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"998a99711d9a79f721fbc2c1a0776b0d6e59478c155e039a0518938e965d2820"} Feb 18 00:52:40 crc kubenswrapper[4858]: I0218 00:52:40.475753 4858 generic.go:334] "Generic (PLEG): container finished" podID="e80e88e1-21eb-46ff-9ee5-d22d3d589ecd" containerID="ce0c9cc1c5da391638527f8c880d54e1375c55a46e782617bf2c63d319921a9c" exitCode=0 Feb 18 00:52:40 crc kubenswrapper[4858]: I0218 00:52:40.475880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gjnp4" event={"ID":"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd","Type":"ContainerDied","Data":"ce0c9cc1c5da391638527f8c880d54e1375c55a46e782617bf2c63d319921a9c"} Feb 18 00:52:41 crc kubenswrapper[4858]: I0218 00:52:41.505547 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"ee7e19c185d1af2595fc2e2cd36da98dbf7b2bd2eaab8b0f61820f432ca829e6"} Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.042568 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.176997 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-config-data\") pod \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.177414 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkr8p\" (UniqueName: \"kubernetes.io/projected/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-kube-api-access-hkr8p\") pod \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.177556 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-db-sync-config-data\") pod \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.177610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-combined-ca-bundle\") pod \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\" (UID: \"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd\") " Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.183436 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e80e88e1-21eb-46ff-9ee5-d22d3d589ecd" (UID: "e80e88e1-21eb-46ff-9ee5-d22d3d589ecd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.183604 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-kube-api-access-hkr8p" (OuterVolumeSpecName: "kube-api-access-hkr8p") pod "e80e88e1-21eb-46ff-9ee5-d22d3d589ecd" (UID: "e80e88e1-21eb-46ff-9ee5-d22d3d589ecd"). InnerVolumeSpecName "kube-api-access-hkr8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.215561 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e80e88e1-21eb-46ff-9ee5-d22d3d589ecd" (UID: "e80e88e1-21eb-46ff-9ee5-d22d3d589ecd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.237442 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-config-data" (OuterVolumeSpecName: "config-data") pod "e80e88e1-21eb-46ff-9ee5-d22d3d589ecd" (UID: "e80e88e1-21eb-46ff-9ee5-d22d3d589ecd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.279709 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.279742 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkr8p\" (UniqueName: \"kubernetes.io/projected/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-kube-api-access-hkr8p\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.279752 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.279760 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.516568 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gjnp4" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.516552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gjnp4" event={"ID":"e80e88e1-21eb-46ff-9ee5-d22d3d589ecd","Type":"ContainerDied","Data":"499ec87a115457c7b5b27b64e7dcd052f0b6e172d423820bcb2b6c026a6263e1"} Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.516629 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="499ec87a115457c7b5b27b64e7dcd052f0b6e172d423820bcb2b6c026a6263e1" Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.522845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"6da7cf40c239a3f3b1dc2b583700920ae0322191b4e359018ccfe77c2dc757f6"} Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.523126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"0392e4a149eaf947c05eb401e73ca64598ca221dbb01acea14ced0fc958f1de5"} Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.523140 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"9b2ceebe2bccea720f4999a90d9fe39f4e518616d1e70a38dc27b41248d8d3b7"} Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.525471 4858 generic.go:334] "Generic (PLEG): container finished" podID="03ab729d-962a-4c7b-8e72-ddf54dd2a69e" containerID="a7a11cfacf44a691b91f028c671ee7ec6da8c93f112d9cc143fe6c097cdc0c28" exitCode=0 Feb 18 00:52:42 crc kubenswrapper[4858]: I0218 00:52:42.525602 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bb8nr" event={"ID":"03ab729d-962a-4c7b-8e72-ddf54dd2a69e","Type":"ContainerDied","Data":"a7a11cfacf44a691b91f028c671ee7ec6da8c93f112d9cc143fe6c097cdc0c28"} Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017255 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-4zw8v"] Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017584 4858 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf554d2-2987-45e7-9676-2139110e2781" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017601 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf554d2-2987-45e7-9676-2139110e2781" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017614 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="831ee652-d7d7-4197-946d-ce5456dcc949" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017620 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="831ee652-d7d7-4197-946d-ce5456dcc949" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017633 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80e88e1-21eb-46ff-9ee5-d22d3d589ecd" containerName="glance-db-sync" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017640 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80e88e1-21eb-46ff-9ee5-d22d3d589ecd" containerName="glance-db-sync" Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017649 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ed5ad30-acfc-4cff-8dfb-a0eb62046780" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017655 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ed5ad30-acfc-4cff-8dfb-a0eb62046780" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017665 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf515e7-1bb4-4a22-baf6-932d935e26d5" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017671 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf515e7-1bb4-4a22-baf6-932d935e26d5" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017685 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017691 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017698 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb91d1f2-1b80-4082-a0a9-067ccadcc3a5" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017704 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb91d1f2-1b80-4082-a0a9-067ccadcc3a5" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017713 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d579ea77-2807-419f-b4f4-558b7cc1a09b" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017719 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d579ea77-2807-419f-b4f4-558b7cc1a09b" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017735 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcfae652-4782-4fce-85dd-1b25547d3189" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017742 4858 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bcfae652-4782-4fce-85dd-1b25547d3189" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: E0218 00:52:43.017761 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a24af-9dad-4e95-a4c0-8296caee70ef" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017768 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4a24af-9dad-4e95-a4c0-8296caee70ef" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017951 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ed5ad30-acfc-4cff-8dfb-a0eb62046780" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017971 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf554d2-2987-45e7-9676-2139110e2781" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.017985 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d579ea77-2807-419f-b4f4-558b7cc1a09b" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.018000 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.018012 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb91d1f2-1b80-4082-a0a9-067ccadcc3a5" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.018024 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcfae652-4782-4fce-85dd-1b25547d3189" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.018034 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80e88e1-21eb-46ff-9ee5-d22d3d589ecd" containerName="glance-db-sync" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.018045 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4a24af-9dad-4e95-a4c0-8296caee70ef" containerName="mariadb-account-create-update" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.018056 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf515e7-1bb4-4a22-baf6-932d935e26d5" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.018066 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="831ee652-d7d7-4197-946d-ce5456dcc949" containerName="mariadb-database-create" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.025002 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.050318 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-4zw8v"] Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.092430 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp68s\" (UniqueName: \"kubernetes.io/projected/0520f98b-3e86-4430-992a-256e63137028-kube-api-access-wp68s\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.093167 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-config\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.093299 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.093559 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.093705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.195646 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp68s\" (UniqueName: \"kubernetes.io/projected/0520f98b-3e86-4430-992a-256e63137028-kube-api-access-wp68s\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.195717 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-config\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.195744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.195794 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.195840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.196639 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-config\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.196671 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.196747 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.196795 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.214258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp68s\" (UniqueName: \"kubernetes.io/projected/0520f98b-3e86-4430-992a-256e63137028-kube-api-access-wp68s\") pod \"dnsmasq-dns-5b946c75cc-4zw8v\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.350609 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.556559 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"240855a5c92e6d14e6806a6fdc51dc040fab7dd4327c941fe21861bcf33f2247"} Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.556597 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"7469e8cfdc1aa06658c3981313749c85b286001ed8ccd599631ab279b4ada7e5"} Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.556612 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"20c4f962317fd99608fc27ef8fe15fcc41d4c10007bc74f8f2bc4c2250800d0a"} Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.556624 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d0600ce0-ec0e-48b8-b22e-7f94ffd40c07","Type":"ContainerStarted","Data":"26ba084b7325ca5a49e6b65437e376a1a69e57a3c8645bd2b7a7dde64ec58601"} Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.607338 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.213479077 podStartE2EDuration="47.607323598s" podCreationTimestamp="2026-02-18 00:51:56 +0000 UTC" firstStartedPulling="2026-02-18 00:52:32.156959494 +0000 UTC m=+1105.462796216" lastFinishedPulling="2026-02-18 00:52:41.550803995 +0000 UTC m=+1114.856640737" observedRunningTime="2026-02-18 00:52:43.607136203 +0000 UTC m=+1116.912972935" watchObservedRunningTime="2026-02-18 00:52:43.607323598 +0000 UTC m=+1116.913160330" Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.791023 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-4zw8v"] Feb 18 00:52:43 crc kubenswrapper[4858]: I0218 00:52:43.967972 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-4zw8v"] Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.000482 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pjsmj"] Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.002386 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.005516 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.022138 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pjsmj"] Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.126740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.126812 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.126862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.126908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-config\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.127022 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvkbc\" (UniqueName: \"kubernetes.io/projected/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-kube-api-access-lvkbc\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.127062 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.229394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.229464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: 
\"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.229523 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.229572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-config\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.229623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvkbc\" (UniqueName: \"kubernetes.io/projected/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-kube-api-access-lvkbc\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.229657 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.231181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-config\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.231179 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.231570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.231861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.232187 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc 
kubenswrapper[4858]: I0218 00:52:44.263284 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvkbc\" (UniqueName: \"kubernetes.io/projected/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-kube-api-access-lvkbc\") pod \"dnsmasq-dns-74f6bcbc87-pjsmj\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.323829 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.519639 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.573400 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bb8nr" event={"ID":"03ab729d-962a-4c7b-8e72-ddf54dd2a69e","Type":"ContainerDied","Data":"8b2bffb8a5957ea50280765a4ad6d9bd8cf8347fd9e7d73aeb25c58a67fee7fd"} Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.573440 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b2bffb8a5957ea50280765a4ad6d9bd8cf8347fd9e7d73aeb25c58a67fee7fd" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.573512 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bb8nr" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.580897 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" event={"ID":"0520f98b-3e86-4430-992a-256e63137028","Type":"ContainerStarted","Data":"c849f6a86202e92550a7cbc9570c52989d82e5322f138059702cb3cb2bb0bb6b"} Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.636174 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-combined-ca-bundle\") pod \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.636350 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-config-data\") pod \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.636450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89qxq\" (UniqueName: \"kubernetes.io/projected/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-kube-api-access-89qxq\") pod \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\" (UID: \"03ab729d-962a-4c7b-8e72-ddf54dd2a69e\") " Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.652778 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-kube-api-access-89qxq" (OuterVolumeSpecName: "kube-api-access-89qxq") pod "03ab729d-962a-4c7b-8e72-ddf54dd2a69e" (UID: "03ab729d-962a-4c7b-8e72-ddf54dd2a69e"). InnerVolumeSpecName "kube-api-access-89qxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.677403 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03ab729d-962a-4c7b-8e72-ddf54dd2a69e" (UID: "03ab729d-962a-4c7b-8e72-ddf54dd2a69e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.703693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-config-data" (OuterVolumeSpecName: "config-data") pod "03ab729d-962a-4c7b-8e72-ddf54dd2a69e" (UID: "03ab729d-962a-4c7b-8e72-ddf54dd2a69e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.738405 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.738442 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.738454 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89qxq\" (UniqueName: \"kubernetes.io/projected/03ab729d-962a-4c7b-8e72-ddf54dd2a69e-kube-api-access-89qxq\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.804285 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pjsmj"] Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.835188 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-pgd4z"] Feb 18 00:52:44 crc kubenswrapper[4858]: E0218 00:52:44.835614 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ab729d-962a-4c7b-8e72-ddf54dd2a69e" containerName="keystone-db-sync" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.835630 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ab729d-962a-4c7b-8e72-ddf54dd2a69e" containerName="keystone-db-sync" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.835814 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ab729d-962a-4c7b-8e72-ddf54dd2a69e" containerName="keystone-db-sync" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.836821 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.841896 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k8lfm"] Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.843178 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.857320 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-pgd4z"] Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.875642 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.896766 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k8lfm"] Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.921658 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pjsmj"] Feb 18 00:52:44 crc kubenswrapper[4858]: W0218 00:52:44.922011 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00cb9158_ed0a_4e63_bef9_ac2fad834ccf.slice/crio-e59c059c109f971e866475d6fe38122812ab273a5a2a549f176ec6757f61332c WatchSource:0}: Error finding container e59c059c109f971e866475d6fe38122812ab273a5a2a549f176ec6757f61332c: Status 404 returned error can't find the container with id e59c059c109f971e866475d6fe38122812ab273a5a2a549f176ec6757f61332c Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943286 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-config\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943337 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-svc\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943387 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tx4g\" (UniqueName: \"kubernetes.io/projected/76c29272-a1d7-4036-83f9-0907f311ca4d-kube-api-access-5tx4g\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943416 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-scripts\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943438 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-credential-keys\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-fernet-keys\") pod \"keystone-bootstrap-k8lfm\" (UID: 
\"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943564 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjchb\" (UniqueName: \"kubernetes.io/projected/4eb53ef6-624b-4a11-90e6-85818e04c3bf-kube-api-access-jjchb\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-config-data\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943649 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943748 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-combined-ca-bundle\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:44 crc kubenswrapper[4858]: I0218 00:52:44.943803 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047114 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tx4g\" (UniqueName: \"kubernetes.io/projected/76c29272-a1d7-4036-83f9-0907f311ca4d-kube-api-access-5tx4g\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-scripts\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-credential-keys\") 
pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047221 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-fernet-keys\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjchb\" (UniqueName: \"kubernetes.io/projected/4eb53ef6-624b-4a11-90e6-85818e04c3bf-kube-api-access-jjchb\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047337 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-config-data\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047355 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047383 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-combined-ca-bundle\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047410 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047459 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-config\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.047483 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-svc\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " 
pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.049252 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.049750 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-svc\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.051516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-config\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.051519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.051573 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.053880 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bd9bf"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.059485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-config-data\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.066352 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.070923 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-scripts\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.072081 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bd9bf"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.075745 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-fernet-keys\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.075870 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.081929 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-credential-keys\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.089361 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjchb\" (UniqueName: \"kubernetes.io/projected/4eb53ef6-624b-4a11-90e6-85818e04c3bf-kube-api-access-jjchb\") pod \"dnsmasq-dns-847c4cc679-pgd4z\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.097127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-combined-ca-bundle\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.075925 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5bqrz" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.076051 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.104029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tx4g\" (UniqueName: \"kubernetes.io/projected/76c29272-a1d7-4036-83f9-0907f311ca4d-kube-api-access-5tx4g\") pod \"keystone-bootstrap-k8lfm\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.117160 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.117405 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.161428 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-config\") pod \"neutron-db-sync-bd9bf\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.161593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-combined-ca-bundle\") pod \"neutron-db-sync-bd9bf\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.161783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lxm7\" (UniqueName: \"kubernetes.io/projected/373a01de-9360-4b5b-8f80-fdfc987dddae-kube-api-access-9lxm7\") pod \"neutron-db-sync-bd9bf\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.167937 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.169905 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.185623 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.185819 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.228261 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.270089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.270153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-run-httpd\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.270184 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-config\") pod \"neutron-db-sync-bd9bf\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.270281 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh4kz\" (UniqueName: \"kubernetes.io/projected/4770fea7-6a1a-44e8-bebe-09220dbc4c71-kube-api-access-mh4kz\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 
crc kubenswrapper[4858]: I0218 00:52:45.270363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-combined-ca-bundle\") pod \"neutron-db-sync-bd9bf\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.270410 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-config-data\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.270466 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-log-httpd\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.270582 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lxm7\" (UniqueName: \"kubernetes.io/projected/373a01de-9360-4b5b-8f80-fdfc987dddae-kube-api-access-9lxm7\") pod \"neutron-db-sync-bd9bf\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.270608 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.270753 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-scripts\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.313994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-combined-ca-bundle\") pod \"neutron-db-sync-bd9bf\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.314668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-config\") pod \"neutron-db-sync-bd9bf\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.321996 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lxm7\" (UniqueName: \"kubernetes.io/projected/373a01de-9360-4b5b-8f80-fdfc987dddae-kube-api-access-9lxm7\") pod \"neutron-db-sync-bd9bf\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.328186 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wrqgb"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.332888 4858 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.337932 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nt9wm" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.360661 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.360868 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.365213 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-x4wqp"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.366690 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.372860 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-scripts\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.373201 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.373230 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-run-httpd\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.373264 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8mb\" (UniqueName: \"kubernetes.io/projected/f69b36cb-f694-4e90-b673-47681459414b-kube-api-access-dv8mb\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.373319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh4kz\" (UniqueName: \"kubernetes.io/projected/4770fea7-6a1a-44e8-bebe-09220dbc4c71-kube-api-access-mh4kz\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.373356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-config-data\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.373401 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-log-httpd\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 
00:52:45.373435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-db-sync-config-data\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.373458 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-config-data\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.375067 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-run-httpd\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.377879 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.378543 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vpb4j" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.383332 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.383605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-combined-ca-bundle\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.383735 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.383800 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-log-httpd\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.383884 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-scripts\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.383915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f69b36cb-f694-4e90-b673-47681459414b-etc-machine-id\") pod \"cinder-db-sync-wrqgb\" (UID: 
\"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.390632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-scripts\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.391305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-config-data\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.391791 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.402323 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh4kz\" (UniqueName: \"kubernetes.io/projected/4770fea7-6a1a-44e8-bebe-09220dbc4c71-kube-api-access-mh4kz\") pod \"ceilometer-0\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.447476 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-bpmww"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.448735 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wrqgb"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.448834 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.450314 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.453260 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x4wqp"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.453584 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.456784 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.456932 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-jbnsw" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.457046 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.472193 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-bpmww"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.485755 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-certs\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.485800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xbk4\" (UniqueName: \"kubernetes.io/projected/423548cb-6c87-4876-a08c-fd64805971ea-kube-api-access-4xbk4\") pod \"barbican-db-sync-x4wqp\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.485834 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-db-sync-config-data\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.485850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-config-data\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.485871 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-db-sync-config-data\") pod \"barbican-db-sync-x4wqp\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.485896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-combined-ca-bundle\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.485935 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vkksg\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-kube-api-access-vkksg\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.485951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-combined-ca-bundle\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.485975 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-combined-ca-bundle\") pod \"barbican-db-sync-x4wqp\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.486006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f69b36cb-f694-4e90-b673-47681459414b-etc-machine-id\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.486022 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-config-data\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.486052 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-scripts\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.486072 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-scripts\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.486094 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8mb\" (UniqueName: \"kubernetes.io/projected/f69b36cb-f694-4e90-b673-47681459414b-kube-api-access-dv8mb\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.487576 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f69b36cb-f694-4e90-b673-47681459414b-etc-machine-id\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.493640 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-k2wn6"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.494924 4858 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.495379 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-combined-ca-bundle\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.497776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-db-sync-config-data\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.498328 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.499414 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vktvv" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.500151 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.503443 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-k2wn6"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.503937 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-scripts\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.504362 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-config-data\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.520409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8mb\" (UniqueName: \"kubernetes.io/projected/f69b36cb-f694-4e90-b673-47681459414b-kube-api-access-dv8mb\") pod \"cinder-db-sync-wrqgb\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.529634 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-pgd4z"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.538288 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4vw4g"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.541612 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.545987 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4vw4g"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.569969 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587170 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587206 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkksg\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-kube-api-access-vkksg\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587224 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-combined-ca-bundle\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-combined-ca-bundle\") pod \"barbican-db-sync-x4wqp\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587267 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v47m\" (UniqueName: \"kubernetes.io/projected/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-kube-api-access-7v47m\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-config-data\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587317 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-scripts\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587340 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-scripts\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587364 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzx9p\" (UniqueName: \"kubernetes.io/projected/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-kube-api-access-dzx9p\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587425 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587442 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-logs\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587459 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-config\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-combined-ca-bundle\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587510 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-certs\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587544 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xbk4\" (UniqueName: \"kubernetes.io/projected/423548cb-6c87-4876-a08c-fd64805971ea-kube-api-access-4xbk4\") pod \"barbican-db-sync-x4wqp\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587575 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-db-sync-config-data\") pod \"barbican-db-sync-x4wqp\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " 
pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.587598 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-config-data\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.596223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-combined-ca-bundle\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.596758 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-db-sync-config-data\") pod \"barbican-db-sync-x4wqp\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.596869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-scripts\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.599221 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-combined-ca-bundle\") pod \"barbican-db-sync-x4wqp\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.602441 4858 generic.go:334] "Generic (PLEG): container finished" podID="00cb9158-ed0a-4e63-bef9-ac2fad834ccf" containerID="fb3ec7f26d0896712ff07ca262ea903058fc3315883fe0cadbcb60a71d80e1d7" exitCode=0 Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.602533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" event={"ID":"00cb9158-ed0a-4e63-bef9-ac2fad834ccf","Type":"ContainerDied","Data":"fb3ec7f26d0896712ff07ca262ea903058fc3315883fe0cadbcb60a71d80e1d7"} Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.602560 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" event={"ID":"00cb9158-ed0a-4e63-bef9-ac2fad834ccf","Type":"ContainerStarted","Data":"e59c059c109f971e866475d6fe38122812ab273a5a2a549f176ec6757f61332c"} Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.603811 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-config-data\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.605316 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkksg\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-kube-api-access-vkksg\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 
00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.613985 4858 generic.go:334] "Generic (PLEG): container finished" podID="0520f98b-3e86-4430-992a-256e63137028" containerID="0364e9c52a6f7f04b68b00fafa750219df0a63394a5d080e3c7bc6b3d456fd36" exitCode=0 Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.614035 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" event={"ID":"0520f98b-3e86-4430-992a-256e63137028","Type":"ContainerDied","Data":"0364e9c52a6f7f04b68b00fafa750219df0a63394a5d080e3c7bc6b3d456fd36"} Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.615029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xbk4\" (UniqueName: \"kubernetes.io/projected/423548cb-6c87-4876-a08c-fd64805971ea-kube-api-access-4xbk4\") pod \"barbican-db-sync-x4wqp\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.622183 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-certs\") pod \"cloudkitty-db-sync-bpmww\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.689589 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.691950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzx9p\" (UniqueName: \"kubernetes.io/projected/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-kube-api-access-dzx9p\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692082 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-logs\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692103 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-config\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692133 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-combined-ca-bundle\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692148 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-config-data\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692274 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v47m\" (UniqueName: \"kubernetes.io/projected/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-kube-api-access-7v47m\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.692365 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-scripts\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.695218 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-config\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.695829 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.696268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-logs\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.696594 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.696844 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.699603 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-scripts\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.699991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.700391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-config-data\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.708664 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-combined-ca-bundle\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.716000 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.729150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzx9p\" (UniqueName: \"kubernetes.io/projected/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-kube-api-access-dzx9p\") pod \"dnsmasq-dns-785d8bcb8c-4vw4g\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.730211 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v47m\" (UniqueName: \"kubernetes.io/projected/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-kube-api-access-7v47m\") pod \"placement-db-sync-k2wn6\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.776825 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.864418 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k8lfm"] Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.878991 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-k2wn6" Feb 18 00:52:45 crc kubenswrapper[4858]: I0218 00:52:45.895024 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.004862 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.020868 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.024950 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.025536 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mlt6s" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.040912 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.050034 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.099209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.099277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.099305 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.099336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4g44\" (UniqueName: \"kubernetes.io/projected/38c176f1-e9ba-4313-a720-6316085bcb41-kube-api-access-s4g44\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.099389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-config-data\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.099417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-logs\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " 
pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.099441 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-scripts\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.118859 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-pgd4z"] Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.162164 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.178422 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.183539 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.197659 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.201792 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.201885 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.201919 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.201967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4g44\" (UniqueName: \"kubernetes.io/projected/38c176f1-e9ba-4313-a720-6316085bcb41-kube-api-access-s4g44\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.202062 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-config-data\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.202101 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-logs\") pod 
\"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.202125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-scripts\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.207182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-scripts\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.207753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.212423 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-logs\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.221887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.222087 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-config-data\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.223216 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.225119 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70d288833f8e05bd5ab355a71e03a1d850821b8fc6c525467c163add739f4167/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.254151 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4g44\" (UniqueName: \"kubernetes.io/projected/38c176f1-e9ba-4313-a720-6316085bcb41-kube-api-access-s4g44\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.332125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.332221 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.332275 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.332528 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.332593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq8cn\" (UniqueName: \"kubernetes.io/projected/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-kube-api-access-zq8cn\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.332626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.332644 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.381466 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.435690 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bd9bf"] Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.437061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.437103 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.437151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq8cn\" (UniqueName: \"kubernetes.io/projected/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-kube-api-access-zq8cn\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.437175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.437191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.437257 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.437305 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.437738 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.438228 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.443479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.445378 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.448433 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.448579 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5803e2a900e6e36c291a83a7f7817a6f6801a9c863eb8ea67b62b877ff35bd26/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.450905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.470127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq8cn\" (UniqueName: \"kubernetes.io/projected/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-kube-api-access-zq8cn\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.528827 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.545609 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.548169 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.629788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" event={"ID":"0520f98b-3e86-4430-992a-256e63137028","Type":"ContainerDied","Data":"c849f6a86202e92550a7cbc9570c52989d82e5322f138059702cb3cb2bb0bb6b"} Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.629814 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-4zw8v" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.630168 4858 scope.go:117] "RemoveContainer" containerID="0364e9c52a6f7f04b68b00fafa750219df0a63394a5d080e3c7bc6b3d456fd36" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.631851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bd9bf" event={"ID":"373a01de-9360-4b5b-8f80-fdfc987dddae","Type":"ContainerStarted","Data":"9f1152ba3936d45352707004e5024feec692c3d2f14104350e32e0c46cca7924"} Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.634223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" event={"ID":"00cb9158-ed0a-4e63-bef9-ac2fad834ccf","Type":"ContainerDied","Data":"e59c059c109f971e866475d6fe38122812ab273a5a2a549f176ec6757f61332c"} Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.634295 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-pjsmj" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.635560 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k8lfm" event={"ID":"76c29272-a1d7-4036-83f9-0907f311ca4d","Type":"ContainerStarted","Data":"c851f21e318745b4ac65d3a4f6e6f96ca9397e32fe50f27eb6b24cd796128fb6"} Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.635579 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k8lfm" event={"ID":"76c29272-a1d7-4036-83f9-0907f311ca4d","Type":"ContainerStarted","Data":"77ca35c2fd4266674f8f388b25583766714cde5a6f50989eeb3cf0a8659d7f4d"} Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.638577 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" event={"ID":"4eb53ef6-624b-4a11-90e6-85818e04c3bf","Type":"ContainerStarted","Data":"a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352"} Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.638633 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" event={"ID":"4eb53ef6-624b-4a11-90e6-85818e04c3bf","Type":"ContainerStarted","Data":"c4a0c7f28727cacf5079df0871b7df831b5da5a56e11b6afa53619daaf329090"} Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.639606 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp68s\" (UniqueName: \"kubernetes.io/projected/0520f98b-3e86-4430-992a-256e63137028-kube-api-access-wp68s\") pod \"0520f98b-3e86-4430-992a-256e63137028\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.639641 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-sb\") pod \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.639678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-nb\") pod \"0520f98b-3e86-4430-992a-256e63137028\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.639718 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lvkbc\" (UniqueName: \"kubernetes.io/projected/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-kube-api-access-lvkbc\") pod \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.639752 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-sb\") pod \"0520f98b-3e86-4430-992a-256e63137028\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.639790 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-dns-svc\") pod \"0520f98b-3e86-4430-992a-256e63137028\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.639809 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-config\") pod \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.639837 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-swift-storage-0\") pod \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.639898 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-config\") pod \"0520f98b-3e86-4430-992a-256e63137028\" (UID: \"0520f98b-3e86-4430-992a-256e63137028\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.640030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-nb\") pod \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.640074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-svc\") pod \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\" (UID: \"00cb9158-ed0a-4e63-bef9-ac2fad834ccf\") " Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.645709 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.652148 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-kube-api-access-lvkbc" (OuterVolumeSpecName: "kube-api-access-lvkbc") pod "00cb9158-ed0a-4e63-bef9-ac2fad834ccf" (UID: "00cb9158-ed0a-4e63-bef9-ac2fad834ccf"). InnerVolumeSpecName "kube-api-access-lvkbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.673651 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0520f98b-3e86-4430-992a-256e63137028" (UID: "0520f98b-3e86-4430-992a-256e63137028"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.674624 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.675786 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k8lfm" podStartSLOduration=2.675770606 podStartE2EDuration="2.675770606s" podCreationTimestamp="2026-02-18 00:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:46.674020344 +0000 UTC m=+1119.979857066" watchObservedRunningTime="2026-02-18 00:52:46.675770606 +0000 UTC m=+1119.981607338" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.686483 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-config" (OuterVolumeSpecName: "config") pod "0520f98b-3e86-4430-992a-256e63137028" (UID: "0520f98b-3e86-4430-992a-256e63137028"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.689533 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0520f98b-3e86-4430-992a-256e63137028-kube-api-access-wp68s" (OuterVolumeSpecName: "kube-api-access-wp68s") pod "0520f98b-3e86-4430-992a-256e63137028" (UID: "0520f98b-3e86-4430-992a-256e63137028"). InnerVolumeSpecName "kube-api-access-wp68s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.695296 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00cb9158-ed0a-4e63-bef9-ac2fad834ccf" (UID: "00cb9158-ed0a-4e63-bef9-ac2fad834ccf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.708281 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.719146 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00cb9158-ed0a-4e63-bef9-ac2fad834ccf" (UID: "00cb9158-ed0a-4e63-bef9-ac2fad834ccf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.720218 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0520f98b-3e86-4430-992a-256e63137028" (UID: "0520f98b-3e86-4430-992a-256e63137028"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.742031 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.742029 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00cb9158-ed0a-4e63-bef9-ac2fad834ccf" (UID: "00cb9158-ed0a-4e63-bef9-ac2fad834ccf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.742062 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.742121 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp68s\" (UniqueName: \"kubernetes.io/projected/0520f98b-3e86-4430-992a-256e63137028-kube-api-access-wp68s\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.742158 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvkbc\" (UniqueName: \"kubernetes.io/projected/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-kube-api-access-lvkbc\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.742170 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.742181 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.742192 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.760009 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "00cb9158-ed0a-4e63-bef9-ac2fad834ccf" (UID: "00cb9158-ed0a-4e63-bef9-ac2fad834ccf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.776559 4858 scope.go:117] "RemoveContainer" containerID="fb3ec7f26d0896712ff07ca262ea903058fc3315883fe0cadbcb60a71d80e1d7" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.787136 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0520f98b-3e86-4430-992a-256e63137028" (UID: "0520f98b-3e86-4430-992a-256e63137028"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.800931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-config" (OuterVolumeSpecName: "config") pod "00cb9158-ed0a-4e63-bef9-ac2fad834ccf" (UID: "00cb9158-ed0a-4e63-bef9-ac2fad834ccf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.809602 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wrqgb"] Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.818546 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x4wqp"] Feb 18 00:52:46 crc kubenswrapper[4858]: W0218 00:52:46.819033 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a7b55c_92f4_41e7_b862_45eadd76013b.slice/crio-e5569a8c4230ba8ff5f002bf109c5a55b4488cb55384826672d543c1b5f15117 WatchSource:0}: Error finding container e5569a8c4230ba8ff5f002bf109c5a55b4488cb55384826672d543c1b5f15117: Status 404 returned error can't find the container with id e5569a8c4230ba8ff5f002bf109c5a55b4488cb55384826672d543c1b5f15117 Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.825949 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-bpmww"] Feb 18 00:52:46 crc kubenswrapper[4858]: W0218 00:52:46.836767 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf69b36cb_f694_4e90_b673_47681459414b.slice/crio-c3fe06be448b77fdb06f527c95e8f51312415c9e4eed381ce921e9ea0d5b28b4 WatchSource:0}: Error finding container c3fe06be448b77fdb06f527c95e8f51312415c9e4eed381ce921e9ea0d5b28b4: Status 404 returned error can't find the container with id c3fe06be448b77fdb06f527c95e8f51312415c9e4eed381ce921e9ea0d5b28b4 Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.843887 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.843930 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.843942 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cb9158-ed0a-4e63-bef9-ac2fad834ccf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:46 crc kubenswrapper[4858]: I0218 00:52:46.843952 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0520f98b-3e86-4430-992a-256e63137028-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.038794 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4vw4g"] Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.082742 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-4zw8v"] Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.098454 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-5b946c75cc-4zw8v"] Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.119572 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-k2wn6"] Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.159548 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pjsmj"] Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.173084 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pjsmj"] Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.248111 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.358928 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-swift-storage-0\") pod \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.358993 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-config\") pod \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.359069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-sb\") pod \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.359109 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-nb\") pod \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.359144 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-svc\") pod \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.359180 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjchb\" (UniqueName: \"kubernetes.io/projected/4eb53ef6-624b-4a11-90e6-85818e04c3bf-kube-api-access-jjchb\") pod \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\" (UID: \"4eb53ef6-624b-4a11-90e6-85818e04c3bf\") " Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.373779 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb53ef6-624b-4a11-90e6-85818e04c3bf-kube-api-access-jjchb" (OuterVolumeSpecName: "kube-api-access-jjchb") pod "4eb53ef6-624b-4a11-90e6-85818e04c3bf" (UID: "4eb53ef6-624b-4a11-90e6-85818e04c3bf"). InnerVolumeSpecName "kube-api-access-jjchb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.402258 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-config" (OuterVolumeSpecName: "config") pod "4eb53ef6-624b-4a11-90e6-85818e04c3bf" (UID: "4eb53ef6-624b-4a11-90e6-85818e04c3bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.402776 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4eb53ef6-624b-4a11-90e6-85818e04c3bf" (UID: "4eb53ef6-624b-4a11-90e6-85818e04c3bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.406887 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4eb53ef6-624b-4a11-90e6-85818e04c3bf" (UID: "4eb53ef6-624b-4a11-90e6-85818e04c3bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.417081 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4eb53ef6-624b-4a11-90e6-85818e04c3bf" (UID: "4eb53ef6-624b-4a11-90e6-85818e04c3bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:47 crc kubenswrapper[4858]: W0218 00:52:47.436797 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38c176f1_e9ba_4313_a720_6316085bcb41.slice/crio-8b4f30678d6a797f4fa12d7a64aafee4cdb038fd806c8de0a03c934d21bf0c13 WatchSource:0}: Error finding container 8b4f30678d6a797f4fa12d7a64aafee4cdb038fd806c8de0a03c934d21bf0c13: Status 404 returned error can't find the container with id 8b4f30678d6a797f4fa12d7a64aafee4cdb038fd806c8de0a03c934d21bf0c13 Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.444754 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4eb53ef6-624b-4a11-90e6-85818e04c3bf" (UID: "4eb53ef6-624b-4a11-90e6-85818e04c3bf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.464714 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.464750 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.464762 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.464773 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.464784 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb53ef6-624b-4a11-90e6-85818e04c3bf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.464794 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjchb\" (UniqueName: \"kubernetes.io/projected/4eb53ef6-624b-4a11-90e6-85818e04c3bf-kube-api-access-jjchb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.516080 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00cb9158-ed0a-4e63-bef9-ac2fad834ccf" path="/var/lib/kubelet/pods/00cb9158-ed0a-4e63-bef9-ac2fad834ccf/volumes" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.517359 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0520f98b-3e86-4430-992a-256e63137028" path="/var/lib/kubelet/pods/0520f98b-3e86-4430-992a-256e63137028/volumes" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.526356 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.684528 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.693628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38c176f1-e9ba-4313-a720-6316085bcb41","Type":"ContainerStarted","Data":"8b4f30678d6a797f4fa12d7a64aafee4cdb038fd806c8de0a03c934d21bf0c13"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.705719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bd9bf" event={"ID":"373a01de-9360-4b5b-8f80-fdfc987dddae","Type":"ContainerStarted","Data":"9a9370ae17661171b0d30c1240331f41a16ba3474aa5719ca1d76ce41b6d1466"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.712670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerStarted","Data":"4003f01bef0b8b7a8162f1ac6ddd466bfac47648b721c664b2e4bdc6b1b0d51f"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.715467 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-db-sync-bpmww" event={"ID":"48a7b55c-92f4-41e7-b862-45eadd76013b","Type":"ContainerStarted","Data":"e5569a8c4230ba8ff5f002bf109c5a55b4488cb55384826672d543c1b5f15117"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.724741 4858 generic.go:334] "Generic (PLEG): container finished" podID="d6419d4c-77e6-41c9-bcbf-e2cc5043232c" containerID="6f80e1a6f7575f9484e5569059acac416fd5cc0fa571fec14f6b233ff423073e" exitCode=0 Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.724809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" event={"ID":"d6419d4c-77e6-41c9-bcbf-e2cc5043232c","Type":"ContainerDied","Data":"6f80e1a6f7575f9484e5569059acac416fd5cc0fa571fec14f6b233ff423073e"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.724855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" event={"ID":"d6419d4c-77e6-41c9-bcbf-e2cc5043232c","Type":"ContainerStarted","Data":"59e61a4ea9605a31d2efe55e806331cc260cfd3db87f1b4166cb1419a45f5881"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.727963 4858 generic.go:334] "Generic (PLEG): container finished" podID="4eb53ef6-624b-4a11-90e6-85818e04c3bf" containerID="a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352" exitCode=0 Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.728058 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.728437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" event={"ID":"4eb53ef6-624b-4a11-90e6-85818e04c3bf","Type":"ContainerDied","Data":"a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.728529 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-pgd4z" event={"ID":"4eb53ef6-624b-4a11-90e6-85818e04c3bf","Type":"ContainerDied","Data":"c4a0c7f28727cacf5079df0871b7df831b5da5a56e11b6afa53619daaf329090"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.728554 4858 scope.go:117] "RemoveContainer" containerID="a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.735048 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bd9bf" podStartSLOduration=2.735028977 podStartE2EDuration="2.735028977s" podCreationTimestamp="2026-02-18 00:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:47.71944347 +0000 UTC m=+1121.025280202" watchObservedRunningTime="2026-02-18 00:52:47.735028977 +0000 UTC m=+1121.040865709" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.740156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrqgb" event={"ID":"f69b36cb-f694-4e90-b673-47681459414b","Type":"ContainerStarted","Data":"c3fe06be448b77fdb06f527c95e8f51312415c9e4eed381ce921e9ea0d5b28b4"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.751140 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x4wqp" event={"ID":"423548cb-6c87-4876-a08c-fd64805971ea","Type":"ContainerStarted","Data":"de074d2ac6cdf89afd0b1d14340f07d4ae8f8344974f6dc3f226ecdaa97e9aca"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 
00:52:47.786211 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k2wn6" event={"ID":"8b6ceabb-aac4-48fc-9d11-abbedea94d2d","Type":"ContainerStarted","Data":"ff5bbb02b35bc62b1f6ff28ebb7cc0fb68c2f64af24c3308c7143e88b98fd98b"} Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.803462 4858 scope.go:117] "RemoveContainer" containerID="a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352" Feb 18 00:52:47 crc kubenswrapper[4858]: E0218 00:52:47.803819 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352\": container with ID starting with a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352 not found: ID does not exist" containerID="a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.803851 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352"} err="failed to get container status \"a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352\": rpc error: code = NotFound desc = could not find container \"a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352\": container with ID starting with a68be7fd6937b207acccd70b7430b0f5a213a9bc52370de644c3a9fe3422b352 not found: ID does not exist" Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.824235 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-pgd4z"] Feb 18 00:52:47 crc kubenswrapper[4858]: I0218 00:52:47.835626 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-pgd4z"] Feb 18 00:52:48 crc kubenswrapper[4858]: I0218 00:52:48.055235 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:52:48 crc kubenswrapper[4858]: I0218 00:52:48.159821 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:52:48 crc kubenswrapper[4858]: I0218 00:52:48.181304 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:52:48 crc kubenswrapper[4858]: I0218 00:52:48.811101 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38c176f1-e9ba-4313-a720-6316085bcb41","Type":"ContainerStarted","Data":"f0e17d81dca23f518aeb7a3a1aa77ef3e45415fe6a8b102f414ba37ba797e4b0"} Feb 18 00:52:48 crc kubenswrapper[4858]: I0218 00:52:48.820971 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50","Type":"ContainerStarted","Data":"8fe215b21dd2dbabe751022d21c983c2bad5c53360040ce837dd40190e93db16"} Feb 18 00:52:48 crc kubenswrapper[4858]: I0218 00:52:48.827236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" event={"ID":"d6419d4c-77e6-41c9-bcbf-e2cc5043232c","Type":"ContainerStarted","Data":"0eeb0956f7ba5140aa721be8c77a35d5fa0090b45aa2bac45100e425213d1c32"} Feb 18 00:52:48 crc kubenswrapper[4858]: I0218 00:52:48.851758 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" podStartSLOduration=3.851740682 podStartE2EDuration="3.851740682s" podCreationTimestamp="2026-02-18 00:52:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:48.843885521 +0000 UTC m=+1122.149722253" watchObservedRunningTime="2026-02-18 00:52:48.851740682 +0000 UTC m=+1122.157577414" Feb 18 00:52:49 crc kubenswrapper[4858]: I0218 00:52:49.457882 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb53ef6-624b-4a11-90e6-85818e04c3bf" path="/var/lib/kubelet/pods/4eb53ef6-624b-4a11-90e6-85818e04c3bf/volumes" Feb 18 00:52:49 crc kubenswrapper[4858]: I0218 00:52:49.872795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50","Type":"ContainerStarted","Data":"3d8d7126b7f1876f2a24e6f00836df27e4d81fc2447550999a24305679a913af"} Feb 18 00:52:49 crc kubenswrapper[4858]: I0218 00:52:49.873082 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:50 crc kubenswrapper[4858]: I0218 00:52:50.884104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50","Type":"ContainerStarted","Data":"83ff457350e9365eff27940cd7989be59e35029c89aa86757ab54679544f31b2"} Feb 18 00:52:50 crc kubenswrapper[4858]: I0218 00:52:50.884255 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerName="glance-log" containerID="cri-o://3d8d7126b7f1876f2a24e6f00836df27e4d81fc2447550999a24305679a913af" gracePeriod=30 Feb 18 00:52:50 crc kubenswrapper[4858]: I0218 00:52:50.884511 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerName="glance-httpd" containerID="cri-o://83ff457350e9365eff27940cd7989be59e35029c89aa86757ab54679544f31b2" gracePeriod=30 Feb 18 00:52:50 crc kubenswrapper[4858]: I0218 00:52:50.906530 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="38c176f1-e9ba-4313-a720-6316085bcb41" containerName="glance-log" containerID="cri-o://f0e17d81dca23f518aeb7a3a1aa77ef3e45415fe6a8b102f414ba37ba797e4b0" gracePeriod=30 Feb 18 00:52:50 crc kubenswrapper[4858]: I0218 00:52:50.907560 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38c176f1-e9ba-4313-a720-6316085bcb41","Type":"ContainerStarted","Data":"d9e370d89cf0b6fc05ec14ededb552ebfa8bc1d42e14ad9173dcb4f2eb80461b"} Feb 18 00:52:50 crc kubenswrapper[4858]: I0218 00:52:50.907626 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="38c176f1-e9ba-4313-a720-6316085bcb41" containerName="glance-httpd" containerID="cri-o://d9e370d89cf0b6fc05ec14ededb552ebfa8bc1d42e14ad9173dcb4f2eb80461b" gracePeriod=30 Feb 18 00:52:50 crc kubenswrapper[4858]: I0218 00:52:50.910193 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.91018334 podStartE2EDuration="5.91018334s" podCreationTimestamp="2026-02-18 00:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:50.900890465 +0000 UTC 
m=+1124.206727207" watchObservedRunningTime="2026-02-18 00:52:50.91018334 +0000 UTC m=+1124.216020072" Feb 18 00:52:50 crc kubenswrapper[4858]: I0218 00:52:50.944752 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.944734547 podStartE2EDuration="6.944734547s" podCreationTimestamp="2026-02-18 00:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:50.927710285 +0000 UTC m=+1124.233547027" watchObservedRunningTime="2026-02-18 00:52:50.944734547 +0000 UTC m=+1124.250571279" Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.918588 4858 generic.go:334] "Generic (PLEG): container finished" podID="76c29272-a1d7-4036-83f9-0907f311ca4d" containerID="c851f21e318745b4ac65d3a4f6e6f96ca9397e32fe50f27eb6b24cd796128fb6" exitCode=0 Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.918702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k8lfm" event={"ID":"76c29272-a1d7-4036-83f9-0907f311ca4d","Type":"ContainerDied","Data":"c851f21e318745b4ac65d3a4f6e6f96ca9397e32fe50f27eb6b24cd796128fb6"} Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.927328 4858 generic.go:334] "Generic (PLEG): container finished" podID="38c176f1-e9ba-4313-a720-6316085bcb41" containerID="d9e370d89cf0b6fc05ec14ededb552ebfa8bc1d42e14ad9173dcb4f2eb80461b" exitCode=0 Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.927352 4858 generic.go:334] "Generic (PLEG): container finished" podID="38c176f1-e9ba-4313-a720-6316085bcb41" containerID="f0e17d81dca23f518aeb7a3a1aa77ef3e45415fe6a8b102f414ba37ba797e4b0" exitCode=143 Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.927409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38c176f1-e9ba-4313-a720-6316085bcb41","Type":"ContainerDied","Data":"d9e370d89cf0b6fc05ec14ededb552ebfa8bc1d42e14ad9173dcb4f2eb80461b"} Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.927438 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38c176f1-e9ba-4313-a720-6316085bcb41","Type":"ContainerDied","Data":"f0e17d81dca23f518aeb7a3a1aa77ef3e45415fe6a8b102f414ba37ba797e4b0"} Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.935015 4858 generic.go:334] "Generic (PLEG): container finished" podID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerID="83ff457350e9365eff27940cd7989be59e35029c89aa86757ab54679544f31b2" exitCode=0 Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.935057 4858 generic.go:334] "Generic (PLEG): container finished" podID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerID="3d8d7126b7f1876f2a24e6f00836df27e4d81fc2447550999a24305679a913af" exitCode=143 Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.935079 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50","Type":"ContainerDied","Data":"83ff457350e9365eff27940cd7989be59e35029c89aa86757ab54679544f31b2"} Feb 18 00:52:51 crc kubenswrapper[4858]: I0218 00:52:51.935099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50","Type":"ContainerDied","Data":"3d8d7126b7f1876f2a24e6f00836df27e4d81fc2447550999a24305679a913af"} Feb 18 00:52:55 crc 
kubenswrapper[4858]: I0218 00:52:55.265689 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:52:55 crc kubenswrapper[4858]: I0218 00:52:55.266350 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:52:55 crc kubenswrapper[4858]: I0218 00:52:55.897346 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:52:55 crc kubenswrapper[4858]: I0218 00:52:55.978136 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nzjqm"] Feb 18 00:52:55 crc kubenswrapper[4858]: I0218 00:52:55.978371 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-nzjqm" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="dnsmasq-dns" containerID="cri-o://db35b96b039b721993ad237b685efce5f9ac89543137f076523e4bb6de788a10" gracePeriod=10 Feb 18 00:52:56 crc kubenswrapper[4858]: I0218 00:52:56.577015 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nzjqm" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.012260 4858 generic.go:334] "Generic (PLEG): container finished" podID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerID="db35b96b039b721993ad237b685efce5f9ac89543137f076523e4bb6de788a10" exitCode=0 Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.012309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nzjqm" event={"ID":"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4","Type":"ContainerDied","Data":"db35b96b039b721993ad237b685efce5f9ac89543137f076523e4bb6de788a10"} Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.138819 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.147928 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.149272 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-httpd-run\") pod \"38c176f1-e9ba-4313-a720-6316085bcb41\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247268 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-fernet-keys\") pod \"76c29272-a1d7-4036-83f9-0907f311ca4d\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247650 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"38c176f1-e9ba-4313-a720-6316085bcb41\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247687 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-logs\") pod \"38c176f1-e9ba-4313-a720-6316085bcb41\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247706 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-scripts\") pod \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247730 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-scripts\") pod \"76c29272-a1d7-4036-83f9-0907f311ca4d\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247754 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-combined-ca-bundle\") pod \"38c176f1-e9ba-4313-a720-6316085bcb41\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247829 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-scripts\") pod \"38c176f1-e9ba-4313-a720-6316085bcb41\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247842 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "38c176f1-e9ba-4313-a720-6316085bcb41" (UID: "38c176f1-e9ba-4313-a720-6316085bcb41"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247868 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-combined-ca-bundle\") pod \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-credential-keys\") pod \"76c29272-a1d7-4036-83f9-0907f311ca4d\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.247966 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tx4g\" (UniqueName: \"kubernetes.io/projected/76c29272-a1d7-4036-83f9-0907f311ca4d-kube-api-access-5tx4g\") pod \"76c29272-a1d7-4036-83f9-0907f311ca4d\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.248032 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-config-data\") pod \"76c29272-a1d7-4036-83f9-0907f311ca4d\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.248048 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-config-data\") pod \"38c176f1-e9ba-4313-a720-6316085bcb41\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.248068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-config-data\") pod \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.248096 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq8cn\" (UniqueName: \"kubernetes.io/projected/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-kube-api-access-zq8cn\") pod \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.248128 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-httpd-run\") pod \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.248174 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-logs\") pod \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\" (UID: \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.248248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\" (UID: 
\"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.248285 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4g44\" (UniqueName: \"kubernetes.io/projected/38c176f1-e9ba-4313-a720-6316085bcb41-kube-api-access-s4g44\") pod \"38c176f1-e9ba-4313-a720-6316085bcb41\" (UID: \"38c176f1-e9ba-4313-a720-6316085bcb41\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.248321 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-combined-ca-bundle\") pod \"76c29272-a1d7-4036-83f9-0907f311ca4d\" (UID: \"76c29272-a1d7-4036-83f9-0907f311ca4d\") " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.249027 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.252182 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-logs" (OuterVolumeSpecName: "logs") pod "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" (UID: "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.252379 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" (UID: "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.255639 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "76c29272-a1d7-4036-83f9-0907f311ca4d" (UID: "76c29272-a1d7-4036-83f9-0907f311ca4d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.255646 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-logs" (OuterVolumeSpecName: "logs") pod "38c176f1-e9ba-4313-a720-6316085bcb41" (UID: "38c176f1-e9ba-4313-a720-6316085bcb41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.264607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "76c29272-a1d7-4036-83f9-0907f311ca4d" (UID: "76c29272-a1d7-4036-83f9-0907f311ca4d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.267093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c29272-a1d7-4036-83f9-0907f311ca4d-kube-api-access-5tx4g" (OuterVolumeSpecName: "kube-api-access-5tx4g") pod "76c29272-a1d7-4036-83f9-0907f311ca4d" (UID: "76c29272-a1d7-4036-83f9-0907f311ca4d"). InnerVolumeSpecName "kube-api-access-5tx4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.267118 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-scripts" (OuterVolumeSpecName: "scripts") pod "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" (UID: "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.267842 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-scripts" (OuterVolumeSpecName: "scripts") pod "38c176f1-e9ba-4313-a720-6316085bcb41" (UID: "38c176f1-e9ba-4313-a720-6316085bcb41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.286196 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-kube-api-access-zq8cn" (OuterVolumeSpecName: "kube-api-access-zq8cn") pod "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" (UID: "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50"). InnerVolumeSpecName "kube-api-access-zq8cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.298774 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c176f1-e9ba-4313-a720-6316085bcb41-kube-api-access-s4g44" (OuterVolumeSpecName: "kube-api-access-s4g44") pod "38c176f1-e9ba-4313-a720-6316085bcb41" (UID: "38c176f1-e9ba-4313-a720-6316085bcb41"). InnerVolumeSpecName "kube-api-access-s4g44". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.299480 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2" (OuterVolumeSpecName: "glance") pod "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" (UID: "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50"). InnerVolumeSpecName "pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.312393 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770" (OuterVolumeSpecName: "glance") pod "38c176f1-e9ba-4313-a720-6316085bcb41" (UID: "38c176f1-e9ba-4313-a720-6316085bcb41"). InnerVolumeSpecName "pvc-ced7117e-c471-49ff-8f11-c2333cc7f770". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.313544 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38c176f1-e9ba-4313-a720-6316085bcb41" (UID: "38c176f1-e9ba-4313-a720-6316085bcb41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.315677 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-config-data" (OuterVolumeSpecName: "config-data") pod "76c29272-a1d7-4036-83f9-0907f311ca4d" (UID: "76c29272-a1d7-4036-83f9-0907f311ca4d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.320461 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-scripts" (OuterVolumeSpecName: "scripts") pod "76c29272-a1d7-4036-83f9-0907f311ca4d" (UID: "76c29272-a1d7-4036-83f9-0907f311ca4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.320676 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76c29272-a1d7-4036-83f9-0907f311ca4d" (UID: "76c29272-a1d7-4036-83f9-0907f311ca4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.335667 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" (UID: "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.338285 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-config-data" (OuterVolumeSpecName: "config-data") pod "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" (UID: "cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350708 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350757 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") on node \"crc\" " Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350772 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4g44\" (UniqueName: \"kubernetes.io/projected/38c176f1-e9ba-4313-a720-6316085bcb41-kube-api-access-s4g44\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350786 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350795 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350811 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") on node \"crc\" " Feb 18 00:52:57 crc 
kubenswrapper[4858]: I0218 00:52:57.350820 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38c176f1-e9ba-4313-a720-6316085bcb41-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350828 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350836 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350843 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350853 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350861 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350869 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350877 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tx4g\" (UniqueName: \"kubernetes.io/projected/76c29272-a1d7-4036-83f9-0907f311ca4d-kube-api-access-5tx4g\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350885 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c29272-a1d7-4036-83f9-0907f311ca4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350893 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350902 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq8cn\" (UniqueName: \"kubernetes.io/projected/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-kube-api-access-zq8cn\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.350910 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.367953 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-config-data" (OuterVolumeSpecName: "config-data") pod "38c176f1-e9ba-4313-a720-6316085bcb41" (UID: "38c176f1-e9ba-4313-a720-6316085bcb41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.377144 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.377286 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ced7117e-c471-49ff-8f11-c2333cc7f770" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770") on node "crc" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.381374 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.381534 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2") on node "crc" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.455410 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c176f1-e9ba-4313-a720-6316085bcb41-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.455755 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:57 crc kubenswrapper[4858]: I0218 00:52:57.455769 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.033420 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k8lfm" event={"ID":"76c29272-a1d7-4036-83f9-0907f311ca4d","Type":"ContainerDied","Data":"77ca35c2fd4266674f8f388b25583766714cde5a6f50989eeb3cf0a8659d7f4d"} Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.033460 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77ca35c2fd4266674f8f388b25583766714cde5a6f50989eeb3cf0a8659d7f4d" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.033535 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k8lfm" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.038144 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38c176f1-e9ba-4313-a720-6316085bcb41","Type":"ContainerDied","Data":"8b4f30678d6a797f4fa12d7a64aafee4cdb038fd806c8de0a03c934d21bf0c13"} Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.038207 4858 scope.go:117] "RemoveContainer" containerID="d9e370d89cf0b6fc05ec14ededb552ebfa8bc1d42e14ad9173dcb4f2eb80461b" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.038227 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.044954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50","Type":"ContainerDied","Data":"8fe215b21dd2dbabe751022d21c983c2bad5c53360040ce837dd40190e93db16"} Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.044972 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.103162 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.117232 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.141610 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.152480 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159030 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:52:58 crc kubenswrapper[4858]: E0218 00:52:58.159386 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c176f1-e9ba-4313-a720-6316085bcb41" containerName="glance-log" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159406 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c176f1-e9ba-4313-a720-6316085bcb41" containerName="glance-log" Feb 18 00:52:58 crc kubenswrapper[4858]: E0218 00:52:58.159420 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerName="glance-log" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159427 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerName="glance-log" Feb 18 00:52:58 crc kubenswrapper[4858]: E0218 00:52:58.159443 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c176f1-e9ba-4313-a720-6316085bcb41" containerName="glance-httpd" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159449 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c176f1-e9ba-4313-a720-6316085bcb41" containerName="glance-httpd" Feb 18 00:52:58 crc kubenswrapper[4858]: E0218 00:52:58.159464 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerName="glance-httpd" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159470 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerName="glance-httpd" Feb 18 00:52:58 crc kubenswrapper[4858]: E0218 00:52:58.159487 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb53ef6-624b-4a11-90e6-85818e04c3bf" containerName="init" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159508 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb53ef6-624b-4a11-90e6-85818e04c3bf" containerName="init" Feb 18 00:52:58 crc kubenswrapper[4858]: E0218 00:52:58.159527 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0520f98b-3e86-4430-992a-256e63137028" containerName="init" Feb 18 00:52:58 crc 
kubenswrapper[4858]: I0218 00:52:58.159534 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0520f98b-3e86-4430-992a-256e63137028" containerName="init" Feb 18 00:52:58 crc kubenswrapper[4858]: E0218 00:52:58.159543 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cb9158-ed0a-4e63-bef9-ac2fad834ccf" containerName="init" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159549 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cb9158-ed0a-4e63-bef9-ac2fad834ccf" containerName="init" Feb 18 00:52:58 crc kubenswrapper[4858]: E0218 00:52:58.159561 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c29272-a1d7-4036-83f9-0907f311ca4d" containerName="keystone-bootstrap" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159567 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c29272-a1d7-4036-83f9-0907f311ca4d" containerName="keystone-bootstrap" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159753 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="00cb9158-ed0a-4e63-bef9-ac2fad834ccf" containerName="init" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159766 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerName="glance-httpd" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159778 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0520f98b-3e86-4430-992a-256e63137028" containerName="init" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159791 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c176f1-e9ba-4313-a720-6316085bcb41" containerName="glance-httpd" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159802 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c29272-a1d7-4036-83f9-0907f311ca4d" containerName="keystone-bootstrap" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159813 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb53ef6-624b-4a11-90e6-85818e04c3bf" containerName="init" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159827 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c176f1-e9ba-4313-a720-6316085bcb41" containerName="glance-log" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.159839 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" containerName="glance-log" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.160880 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.165714 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mlt6s" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.165731 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.165899 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.165998 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.167766 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.186155 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.187879 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.195470 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.198812 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.199015 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-config-data\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266476 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-logs\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266557 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266674 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-config-data\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266699 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-scripts\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-logs\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266846 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2kd9\" (UniqueName: \"kubernetes.io/projected/877dd3dc-e90c-4751-9650-17e13a905e75-kube-api-access-w2kd9\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266882 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw99h\" (UniqueName: \"kubernetes.io/projected/263681ae-36ff-4a39-8e4c-1971633851ee-kube-api-access-bw99h\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266903 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" 
Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266938 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-scripts\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.266957 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.343040 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k8lfm"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.352170 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k8lfm"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw99h\" (UniqueName: \"kubernetes.io/projected/263681ae-36ff-4a39-8e4c-1971633851ee-kube-api-access-bw99h\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371274 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-scripts\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371303 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " 
pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371346 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-config-data\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371379 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-logs\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371433 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371468 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371514 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371551 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-config-data\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.371573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-scripts\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.372949 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-logs\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.375076 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.375128 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.375904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2kd9\" (UniqueName: \"kubernetes.io/projected/877dd3dc-e90c-4751-9650-17e13a905e75-kube-api-access-w2kd9\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.376719 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-logs\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.381950 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.382780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-scripts\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.383656 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-logs\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.385752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.386159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-scripts\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.386638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-config-data\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.388868 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.388908 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5803e2a900e6e36c291a83a7f7817a6f6801a9c863eb8ea67b62b877ff35bd26/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.388971 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.388996 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70d288833f8e05bd5ab355a71e03a1d850821b8fc6c525467c163add739f4167/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.389341 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.392126 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.392908 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.397551 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.398627 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2kd9\" (UniqueName: \"kubernetes.io/projected/877dd3dc-e90c-4751-9650-17e13a905e75-kube-api-access-w2kd9\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.399522 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-config-data\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.403536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw99h\" (UniqueName: \"kubernetes.io/projected/263681ae-36ff-4a39-8e4c-1971633851ee-kube-api-access-bw99h\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.457155 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2cphn"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.458419 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.462652 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x4lrd" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.462886 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.462979 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.463134 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.465023 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.477867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.478682 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2cphn"] Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.505591 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.533782 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.578747 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-combined-ca-bundle\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.578817 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-fernet-keys\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.579115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-config-data\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.579188 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-scripts\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.579331 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g95v\" (UniqueName: \"kubernetes.io/projected/27254f13-cc74-43cf-9b54-08d87277de31-kube-api-access-4g95v\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.579355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-credential-keys\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.680659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-combined-ca-bundle\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.680733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-fernet-keys\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.680827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-config-data\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " 
pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.680875 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-scripts\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.680967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g95v\" (UniqueName: \"kubernetes.io/projected/27254f13-cc74-43cf-9b54-08d87277de31-kube-api-access-4g95v\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.681007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-credential-keys\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.684737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-scripts\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.685369 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-config-data\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.686517 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-credential-keys\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.686576 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-combined-ca-bundle\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.688026 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-fernet-keys\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.697190 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g95v\" (UniqueName: \"kubernetes.io/projected/27254f13-cc74-43cf-9b54-08d87277de31-kube-api-access-4g95v\") pod \"keystone-bootstrap-2cphn\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:58 crc kubenswrapper[4858]: I0218 00:52:58.816012 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:52:59 crc kubenswrapper[4858]: I0218 00:52:59.431841 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c176f1-e9ba-4313-a720-6316085bcb41" path="/var/lib/kubelet/pods/38c176f1-e9ba-4313-a720-6316085bcb41/volumes" Feb 18 00:52:59 crc kubenswrapper[4858]: I0218 00:52:59.433334 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76c29272-a1d7-4036-83f9-0907f311ca4d" path="/var/lib/kubelet/pods/76c29272-a1d7-4036-83f9-0907f311ca4d/volumes" Feb 18 00:52:59 crc kubenswrapper[4858]: I0218 00:52:59.433979 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50" path="/var/lib/kubelet/pods/cb0f3e6e-1d4e-4cb7-a3ce-5ba44cd58e50/volumes" Feb 18 00:53:01 crc kubenswrapper[4858]: I0218 00:53:01.578014 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nzjqm" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Feb 18 00:53:02 crc kubenswrapper[4858]: E0218 00:53:02.629679 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 18 00:53:02 crc kubenswrapper[4858]: E0218 00:53:02.630268 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xbk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-x4wqp_openstack(423548cb-6c87-4876-a08c-fd64805971ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:53:02 crc kubenswrapper[4858]: E0218 00:53:02.631556 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-x4wqp" podUID="423548cb-6c87-4876-a08c-fd64805971ea" Feb 18 00:53:03 crc kubenswrapper[4858]: E0218 00:53:03.099977 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-x4wqp" podUID="423548cb-6c87-4876-a08c-fd64805971ea" Feb 18 00:53:06 crc kubenswrapper[4858]: I0218 00:53:06.125121 4858 generic.go:334] "Generic (PLEG): container finished" podID="373a01de-9360-4b5b-8f80-fdfc987dddae" containerID="9a9370ae17661171b0d30c1240331f41a16ba3474aa5719ca1d76ce41b6d1466" exitCode=0 Feb 18 00:53:06 crc kubenswrapper[4858]: I0218 00:53:06.125297 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bd9bf" event={"ID":"373a01de-9360-4b5b-8f80-fdfc987dddae","Type":"ContainerDied","Data":"9a9370ae17661171b0d30c1240331f41a16ba3474aa5719ca1d76ce41b6d1466"} Feb 18 00:53:06 crc kubenswrapper[4858]: I0218 00:53:06.576881 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nzjqm" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Feb 18 00:53:06 crc kubenswrapper[4858]: I0218 00:53:06.577012 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.144129 4858 scope.go:117] "RemoveContainer" containerID="f0e17d81dca23f518aeb7a3a1aa77ef3e45415fe6a8b102f414ba37ba797e4b0" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.242109 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bd9bf" event={"ID":"373a01de-9360-4b5b-8f80-fdfc987dddae","Type":"ContainerDied","Data":"9f1152ba3936d45352707004e5024feec692c3d2f14104350e32e0c46cca7924"} Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.242151 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f1152ba3936d45352707004e5024feec692c3d2f14104350e32e0c46cca7924" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.284152 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.355565 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-combined-ca-bundle\") pod \"373a01de-9360-4b5b-8f80-fdfc987dddae\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.356108 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lxm7\" (UniqueName: \"kubernetes.io/projected/373a01de-9360-4b5b-8f80-fdfc987dddae-kube-api-access-9lxm7\") pod \"373a01de-9360-4b5b-8f80-fdfc987dddae\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.356347 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-config\") pod \"373a01de-9360-4b5b-8f80-fdfc987dddae\" (UID: \"373a01de-9360-4b5b-8f80-fdfc987dddae\") " Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.383092 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373a01de-9360-4b5b-8f80-fdfc987dddae-kube-api-access-9lxm7" (OuterVolumeSpecName: "kube-api-access-9lxm7") pod "373a01de-9360-4b5b-8f80-fdfc987dddae" (UID: "373a01de-9360-4b5b-8f80-fdfc987dddae"). InnerVolumeSpecName "kube-api-access-9lxm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.388692 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-config" (OuterVolumeSpecName: "config") pod "373a01de-9360-4b5b-8f80-fdfc987dddae" (UID: "373a01de-9360-4b5b-8f80-fdfc987dddae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.388827 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "373a01de-9360-4b5b-8f80-fdfc987dddae" (UID: "373a01de-9360-4b5b-8f80-fdfc987dddae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.458565 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.458605 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373a01de-9360-4b5b-8f80-fdfc987dddae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.458623 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lxm7\" (UniqueName: \"kubernetes.io/projected/373a01de-9360-4b5b-8f80-fdfc987dddae-kube-api-access-9lxm7\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:16 crc kubenswrapper[4858]: I0218 00:53:16.576876 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nzjqm" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: i/o timeout" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.253983 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bd9bf" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.552234 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bw4lt"] Feb 18 00:53:17 crc kubenswrapper[4858]: E0218 00:53:17.553083 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373a01de-9360-4b5b-8f80-fdfc987dddae" containerName="neutron-db-sync" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.553101 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="373a01de-9360-4b5b-8f80-fdfc987dddae" containerName="neutron-db-sync" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.553342 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="373a01de-9360-4b5b-8f80-fdfc987dddae" containerName="neutron-db-sync" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.556094 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.587008 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bw4lt"] Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.642117 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fcf66f4c6-vkspn"] Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.645893 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.649285 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5bqrz" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.649818 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.650163 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.650466 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.666640 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fcf66f4c6-vkspn"] Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.686409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-svc\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.686716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.686820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qljdd\" (UniqueName: \"kubernetes.io/projected/1c2489a7-5053-411a-9df6-8d6a659a36e2-kube-api-access-qljdd\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.686913 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.687040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-config\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.687143 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789153 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-svc\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789229 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-combined-ca-bundle\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789266 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-httpd-config\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789290 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qljdd\" (UniqueName: \"kubernetes.io/projected/1c2489a7-5053-411a-9df6-8d6a659a36e2-kube-api-access-qljdd\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789364 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-ovndb-tls-certs\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789390 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-config\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b9xp\" 
(UniqueName: \"kubernetes.io/projected/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-kube-api-access-7b9xp\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.789466 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-config\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.790077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-svc\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.790990 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.791135 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.791569 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.792692 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-config\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.825639 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qljdd\" (UniqueName: \"kubernetes.io/projected/1c2489a7-5053-411a-9df6-8d6a659a36e2-kube-api-access-qljdd\") pod \"dnsmasq-dns-55f844cf75-bw4lt\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.892100 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-combined-ca-bundle\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.892395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-httpd-config\") pod \"neutron-5fcf66f4c6-vkspn\" 
(UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.892531 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-ovndb-tls-certs\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.892675 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b9xp\" (UniqueName: \"kubernetes.io/projected/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-kube-api-access-7b9xp\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.892773 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-config\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.896561 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-httpd-config\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.899021 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-ovndb-tls-certs\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.900037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-combined-ca-bundle\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.900532 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.912763 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b9xp\" (UniqueName: \"kubernetes.io/projected/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-kube-api-access-7b9xp\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.918792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-config\") pod \"neutron-5fcf66f4c6-vkspn\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:17 crc kubenswrapper[4858]: I0218 00:53:17.972209 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.663411 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f78847c8f-hz7xt"] Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.665403 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.670615 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.670615 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.683966 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f78847c8f-hz7xt"] Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.730758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-internal-tls-certs\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.730847 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-combined-ca-bundle\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.730931 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-config\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.730998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfgvb\" (UniqueName: \"kubernetes.io/projected/f15c5e19-1645-4791-8981-2216c5be654b-kube-api-access-kfgvb\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.731065 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-httpd-config\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.731133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-public-tls-certs\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.731157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-ovndb-tls-certs\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.834470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-httpd-config\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.834588 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-public-tls-certs\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.834619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-ovndb-tls-certs\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.834717 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-internal-tls-certs\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.834779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-combined-ca-bundle\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.834840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-config\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.834913 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfgvb\" (UniqueName: \"kubernetes.io/projected/f15c5e19-1645-4791-8981-2216c5be654b-kube-api-access-kfgvb\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.839036 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-httpd-config\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.842054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-ovndb-tls-certs\") pod \"neutron-6f78847c8f-hz7xt\" (UID: 
\"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.844373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-internal-tls-certs\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.849780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-public-tls-certs\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.855836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-config\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.870835 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfgvb\" (UniqueName: \"kubernetes.io/projected/f15c5e19-1645-4791-8981-2216c5be654b-kube-api-access-kfgvb\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:19 crc kubenswrapper[4858]: I0218 00:53:19.872000 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-combined-ca-bundle\") pod \"neutron-6f78847c8f-hz7xt\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:20 crc kubenswrapper[4858]: I0218 00:53:20.031553 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:21 crc kubenswrapper[4858]: E0218 00:53:21.494400 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 00:53:21 crc kubenswrapper[4858]: E0218 00:53:21.495096 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dv8mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wrqgb_openstack(f69b36cb-f694-4e90-b673-47681459414b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:53:21 crc kubenswrapper[4858]: E0218 00:53:21.496470 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wrqgb" podUID="f69b36cb-f694-4e90-b673-47681459414b" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.558477 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.577463 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nzjqm" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: i/o timeout" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.671909 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr76d\" (UniqueName: \"kubernetes.io/projected/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-kube-api-access-rr76d\") pod \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.672058 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-sb\") pod \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.672129 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-nb\") pod \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.672213 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-dns-svc\") pod \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.672292 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-config\") pod \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\" (UID: \"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4\") " Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.693996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-kube-api-access-rr76d" (OuterVolumeSpecName: "kube-api-access-rr76d") pod "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" (UID: "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4"). InnerVolumeSpecName "kube-api-access-rr76d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.721818 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-config" (OuterVolumeSpecName: "config") pod "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" (UID: "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.725921 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" (UID: "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.732088 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" (UID: "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.736478 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" (UID: "086d4d86-55ee-4c9b-b1c0-5cce4212d8e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.775577 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.775612 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.775628 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.775640 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr76d\" (UniqueName: \"kubernetes.io/projected/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-kube-api-access-rr76d\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:21 crc kubenswrapper[4858]: I0218 00:53:21.775654 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:22 crc kubenswrapper[4858]: I0218 00:53:22.316151 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nzjqm" Feb 18 00:53:22 crc kubenswrapper[4858]: I0218 00:53:22.318159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nzjqm" event={"ID":"086d4d86-55ee-4c9b-b1c0-5cce4212d8e4","Type":"ContainerDied","Data":"adf0cc1347d8215a4c1d9c0eaaac93bd06a3e1746a5aa06ffd9a99d144852ad6"} Feb 18 00:53:22 crc kubenswrapper[4858]: E0218 00:53:22.319534 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-wrqgb" podUID="f69b36cb-f694-4e90-b673-47681459414b" Feb 18 00:53:22 crc kubenswrapper[4858]: I0218 00:53:22.363829 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nzjqm"] Feb 18 00:53:22 crc kubenswrapper[4858]: I0218 00:53:22.371924 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nzjqm"] Feb 18 00:53:23 crc kubenswrapper[4858]: I0218 00:53:23.438047 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" path="/var/lib/kubelet/pods/086d4d86-55ee-4c9b-b1c0-5cce4212d8e4/volumes" Feb 18 00:53:24 crc kubenswrapper[4858]: I0218 00:53:24.109519 4858 scope.go:117] "RemoveContainer" containerID="83ff457350e9365eff27940cd7989be59e35029c89aa86757ab54679544f31b2" Feb 18 00:53:24 crc kubenswrapper[4858]: I0218 00:53:24.577572 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2cphn"] Feb 18 00:53:24 crc kubenswrapper[4858]: I0218 00:53:24.708858 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:53:24 crc kubenswrapper[4858]: I0218 00:53:24.793975 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:53:24 crc kubenswrapper[4858]: W0218 00:53:24.985879 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877dd3dc_e90c_4751_9650_17e13a905e75.slice/crio-b616633e11a4b9ec85b6721f90e7d5fe5f70c7cb7fa161ca85c52c2ec186e345 WatchSource:0}: Error finding container b616633e11a4b9ec85b6721f90e7d5fe5f70c7cb7fa161ca85c52c2ec186e345: Status 404 returned error can't find the container with id b616633e11a4b9ec85b6721f90e7d5fe5f70c7cb7fa161ca85c52c2ec186e345 Feb 18 00:53:24 crc kubenswrapper[4858]: W0218 00:53:24.988789 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod263681ae_36ff_4a39_8e4c_1971633851ee.slice/crio-5c42157a0489161c34ec6ce3d067fcbe87e70d31cd85baba564e165b2ca55a2d WatchSource:0}: Error finding container 5c42157a0489161c34ec6ce3d067fcbe87e70d31cd85baba564e165b2ca55a2d: Status 404 returned error can't find the container with id 5c42157a0489161c34ec6ce3d067fcbe87e70d31cd85baba564e165b2ca55a2d Feb 18 00:53:25 crc kubenswrapper[4858]: E0218 00:53:25.035243 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 18 00:53:25 crc kubenswrapper[4858]: E0218 00:53:25.035299 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 18 00:53:25 crc kubenswrapper[4858]: E0218 00:53:25.035413 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkksg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-bpmww_openstack(48a7b55c-92f4-41e7-b862-45eadd76013b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:53:25 crc kubenswrapper[4858]: E0218 00:53:25.036708 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-bpmww" podUID="48a7b55c-92f4-41e7-b862-45eadd76013b" Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.059975 4858 scope.go:117] "RemoveContainer" containerID="3d8d7126b7f1876f2a24e6f00836df27e4d81fc2447550999a24305679a913af" Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.181296 4858 scope.go:117] "RemoveContainer" containerID="db35b96b039b721993ad237b685efce5f9ac89543137f076523e4bb6de788a10" Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 
00:53:25.248431 4858 scope.go:117] "RemoveContainer" containerID="628528138c29d44263f6ec9ec429257429b2aa999ab82c96f6e33c640535cac8" Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.264955 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.264997 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.265036 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.265685 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4583245cbf90bbb57e10dd728d6513f324cbca372738a19b656a2071d981e8c4"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.265731 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://4583245cbf90bbb57e10dd728d6513f324cbca372738a19b656a2071d981e8c4" gracePeriod=600 Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.375267 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"263681ae-36ff-4a39-8e4c-1971633851ee","Type":"ContainerStarted","Data":"5c42157a0489161c34ec6ce3d067fcbe87e70d31cd85baba564e165b2ca55a2d"} Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.386001 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"877dd3dc-e90c-4751-9650-17e13a905e75","Type":"ContainerStarted","Data":"b616633e11a4b9ec85b6721f90e7d5fe5f70c7cb7fa161ca85c52c2ec186e345"} Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.395198 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2cphn" event={"ID":"27254f13-cc74-43cf-9b54-08d87277de31","Type":"ContainerStarted","Data":"fde2c88a403acffe96315b4c3ed906bc50341be27800eb24c88639e794dff289"} Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.411563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerStarted","Data":"ffebf633b81c91d1f3d8ee0291a010200e29d37b5d8bf7f17dbc885d34112dc3"} Feb 18 00:53:25 crc kubenswrapper[4858]: E0218 00:53:25.412303 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-bpmww" 
podUID="48a7b55c-92f4-41e7-b862-45eadd76013b" Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.559069 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fcf66f4c6-vkspn"] Feb 18 00:53:25 crc kubenswrapper[4858]: W0218 00:53:25.595197 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2ca2386_8f22_41a1_87ad_7b9ff91754ef.slice/crio-1a66a225282f2210b9456c2ad2f79621cb750f8b27208395511bef1f3ee805ee WatchSource:0}: Error finding container 1a66a225282f2210b9456c2ad2f79621cb750f8b27208395511bef1f3ee805ee: Status 404 returned error can't find the container with id 1a66a225282f2210b9456c2ad2f79621cb750f8b27208395511bef1f3ee805ee Feb 18 00:53:25 crc kubenswrapper[4858]: E0218 00:53:25.649391 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7172df49_6116_4968_a2b5_a1afb116568b.slice/crio-4583245cbf90bbb57e10dd728d6513f324cbca372738a19b656a2071d981e8c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7172df49_6116_4968_a2b5_a1afb116568b.slice/crio-conmon-4583245cbf90bbb57e10dd728d6513f324cbca372738a19b656a2071d981e8c4.scope\": RecentStats: unable to find data in memory cache]" Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.686068 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f78847c8f-hz7xt"] Feb 18 00:53:25 crc kubenswrapper[4858]: I0218 00:53:25.702884 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bw4lt"] Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.419777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f78847c8f-hz7xt" event={"ID":"f15c5e19-1645-4791-8981-2216c5be654b","Type":"ContainerStarted","Data":"cef7714b2664ce0abdce5e860eb77b1e5e1d954e5268f995a053027d4bd06ba8"} Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.423311 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="4583245cbf90bbb57e10dd728d6513f324cbca372738a19b656a2071d981e8c4" exitCode=0 Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.423389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"4583245cbf90bbb57e10dd728d6513f324cbca372738a19b656a2071d981e8c4"} Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.423441 4858 scope.go:117] "RemoveContainer" containerID="4232da30acbadfd7cea82898dfbb989b7b4cf3f5d440e264df04ddfd7051cdf7" Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.436836 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x4wqp" event={"ID":"423548cb-6c87-4876-a08c-fd64805971ea","Type":"ContainerStarted","Data":"1c85fdcbed14012ce2425b4f6a426c0e3b08b72d2006aac0bcc305570620b15d"} Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.439234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" event={"ID":"1c2489a7-5053-411a-9df6-8d6a659a36e2","Type":"ContainerStarted","Data":"578799dc36a99317241c52b736df5470b1c27a284bb32c8fe3175ae4a41223e0"} Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.444811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-5fcf66f4c6-vkspn" event={"ID":"d2ca2386-8f22-41a1-87ad-7b9ff91754ef","Type":"ContainerStarted","Data":"1a66a225282f2210b9456c2ad2f79621cb750f8b27208395511bef1f3ee805ee"} Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.446346 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k2wn6" event={"ID":"8b6ceabb-aac4-48fc-9d11-abbedea94d2d","Type":"ContainerStarted","Data":"08dad992e4ffad74a4a43b675f9fb4b6a788c6f31bf74345751b5438743b1ba5"} Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.447904 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2cphn" event={"ID":"27254f13-cc74-43cf-9b54-08d87277de31","Type":"ContainerStarted","Data":"f3811c6e5165da31a9b11d83e47c6fd1e6e32765366f2a4949b7e3cba6cc0f9f"} Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.467192 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-x4wqp" podStartSLOduration=3.281122167 podStartE2EDuration="41.467174368s" podCreationTimestamp="2026-02-18 00:52:45 +0000 UTC" firstStartedPulling="2026-02-18 00:52:46.904937557 +0000 UTC m=+1120.210774289" lastFinishedPulling="2026-02-18 00:53:25.090989758 +0000 UTC m=+1158.396826490" observedRunningTime="2026-02-18 00:53:26.466040722 +0000 UTC m=+1159.771877474" watchObservedRunningTime="2026-02-18 00:53:26.467174368 +0000 UTC m=+1159.773011100" Feb 18 00:53:26 crc kubenswrapper[4858]: I0218 00:53:26.487407 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-k2wn6" podStartSLOduration=9.346096962 podStartE2EDuration="41.487389519s" podCreationTimestamp="2026-02-18 00:52:45 +0000 UTC" firstStartedPulling="2026-02-18 00:52:47.056725215 +0000 UTC m=+1120.362561947" lastFinishedPulling="2026-02-18 00:53:19.198017762 +0000 UTC m=+1152.503854504" observedRunningTime="2026-02-18 00:53:26.486478766 +0000 UTC m=+1159.792315498" watchObservedRunningTime="2026-02-18 00:53:26.487389519 +0000 UTC m=+1159.793226251" Feb 18 00:53:27 crc kubenswrapper[4858]: I0218 00:53:27.473048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"3b23372d3ce28e5b83a0ba7f985899a4c07e0af615e7bbb60af93be8938ae620"} Feb 18 00:53:27 crc kubenswrapper[4858]: I0218 00:53:27.765404 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2cphn" podStartSLOduration=29.76538571 podStartE2EDuration="29.76538571s" podCreationTimestamp="2026-02-18 00:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:27.761118156 +0000 UTC m=+1161.066954888" watchObservedRunningTime="2026-02-18 00:53:27.76538571 +0000 UTC m=+1161.071222442" Feb 18 00:53:28 crc kubenswrapper[4858]: I0218 00:53:28.498099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"877dd3dc-e90c-4751-9650-17e13a905e75","Type":"ContainerStarted","Data":"e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8"} Feb 18 00:53:28 crc kubenswrapper[4858]: I0218 00:53:28.507913 4858 generic.go:334] "Generic (PLEG): container finished" podID="1c2489a7-5053-411a-9df6-8d6a659a36e2" containerID="af09bf3cae731673fe3f9b669d6f61d904f19d825eddfe38cb5b398727fbdcf8" exitCode=0 Feb 18 00:53:28 crc 
kubenswrapper[4858]: I0218 00:53:28.508051 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" event={"ID":"1c2489a7-5053-411a-9df6-8d6a659a36e2","Type":"ContainerDied","Data":"af09bf3cae731673fe3f9b669d6f61d904f19d825eddfe38cb5b398727fbdcf8"} Feb 18 00:53:28 crc kubenswrapper[4858]: I0218 00:53:28.539557 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcf66f4c6-vkspn" event={"ID":"d2ca2386-8f22-41a1-87ad-7b9ff91754ef","Type":"ContainerStarted","Data":"9edd90bbbbb35663d5f161377118de61616fcf832527ac5a73b726c83781b6ce"} Feb 18 00:53:28 crc kubenswrapper[4858]: I0218 00:53:28.556552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f78847c8f-hz7xt" event={"ID":"f15c5e19-1645-4791-8981-2216c5be654b","Type":"ContainerStarted","Data":"2cdc75157a3e22acce190039d63927731744843656fae2003afbeb33ed574364"} Feb 18 00:53:28 crc kubenswrapper[4858]: I0218 00:53:28.565528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"263681ae-36ff-4a39-8e4c-1971633851ee","Type":"ContainerStarted","Data":"1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972"} Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.608368 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"877dd3dc-e90c-4751-9650-17e13a905e75","Type":"ContainerStarted","Data":"6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f"} Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.614870 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" event={"ID":"1c2489a7-5053-411a-9df6-8d6a659a36e2","Type":"ContainerStarted","Data":"344900eec47cc44ef1c3ed3261f039350fe922448c79e1530497b1ad9c8c070e"} Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.615752 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.619294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcf66f4c6-vkspn" event={"ID":"d2ca2386-8f22-41a1-87ad-7b9ff91754ef","Type":"ContainerStarted","Data":"a128da7152c208000c536c7699b24204df2cda117d1aeef05c3105b57de627ee"} Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.619431 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.621765 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f78847c8f-hz7xt" event={"ID":"f15c5e19-1645-4791-8981-2216c5be654b","Type":"ContainerStarted","Data":"e498507e534f9b9712c7af1d4443f81de4bc7b4ffabd92ef3be6b82a7eda3f54"} Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.621951 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.624615 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"263681ae-36ff-4a39-8e4c-1971633851ee","Type":"ContainerStarted","Data":"fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9"} Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.644705 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.644686678 
podStartE2EDuration="31.644686678s" podCreationTimestamp="2026-02-18 00:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:29.640250411 +0000 UTC m=+1162.946087163" watchObservedRunningTime="2026-02-18 00:53:29.644686678 +0000 UTC m=+1162.950523410" Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.669080 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f78847c8f-hz7xt" podStartSLOduration=10.669061489 podStartE2EDuration="10.669061489s" podCreationTimestamp="2026-02-18 00:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:29.657172691 +0000 UTC m=+1162.963009423" watchObservedRunningTime="2026-02-18 00:53:29.669061489 +0000 UTC m=+1162.974898221" Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.683404 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=31.683387395 podStartE2EDuration="31.683387395s" podCreationTimestamp="2026-02-18 00:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:29.68274524 +0000 UTC m=+1162.988581972" watchObservedRunningTime="2026-02-18 00:53:29.683387395 +0000 UTC m=+1162.989224127" Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.716685 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" podStartSLOduration=12.716662872 podStartE2EDuration="12.716662872s" podCreationTimestamp="2026-02-18 00:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:29.708374571 +0000 UTC m=+1163.014211303" watchObservedRunningTime="2026-02-18 00:53:29.716662872 +0000 UTC m=+1163.022499604" Feb 18 00:53:29 crc kubenswrapper[4858]: I0218 00:53:29.732013 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fcf66f4c6-vkspn" podStartSLOduration=12.731993563 podStartE2EDuration="12.731993563s" podCreationTimestamp="2026-02-18 00:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:29.724480051 +0000 UTC m=+1163.030316773" watchObservedRunningTime="2026-02-18 00:53:29.731993563 +0000 UTC m=+1163.037830295" Feb 18 00:53:30 crc kubenswrapper[4858]: I0218 00:53:30.635965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerStarted","Data":"32988669988c871a59910744c626cb00c849036db3f2e8d590654c6856162836"} Feb 18 00:53:30 crc kubenswrapper[4858]: I0218 00:53:30.638746 4858 generic.go:334] "Generic (PLEG): container finished" podID="8b6ceabb-aac4-48fc-9d11-abbedea94d2d" containerID="08dad992e4ffad74a4a43b675f9fb4b6a788c6f31bf74345751b5438743b1ba5" exitCode=0 Feb 18 00:53:30 crc kubenswrapper[4858]: I0218 00:53:30.638810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k2wn6" event={"ID":"8b6ceabb-aac4-48fc-9d11-abbedea94d2d","Type":"ContainerDied","Data":"08dad992e4ffad74a4a43b675f9fb4b6a788c6f31bf74345751b5438743b1ba5"} Feb 18 00:53:30 crc kubenswrapper[4858]: I0218 
00:53:30.641683 4858 generic.go:334] "Generic (PLEG): container finished" podID="27254f13-cc74-43cf-9b54-08d87277de31" containerID="f3811c6e5165da31a9b11d83e47c6fd1e6e32765366f2a4949b7e3cba6cc0f9f" exitCode=0 Feb 18 00:53:30 crc kubenswrapper[4858]: I0218 00:53:30.641910 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2cphn" event={"ID":"27254f13-cc74-43cf-9b54-08d87277de31","Type":"ContainerDied","Data":"f3811c6e5165da31a9b11d83e47c6fd1e6e32765366f2a4949b7e3cba6cc0f9f"} Feb 18 00:53:31 crc kubenswrapper[4858]: I0218 00:53:31.656021 4858 generic.go:334] "Generic (PLEG): container finished" podID="423548cb-6c87-4876-a08c-fd64805971ea" containerID="1c85fdcbed14012ce2425b4f6a426c0e3b08b72d2006aac0bcc305570620b15d" exitCode=0 Feb 18 00:53:31 crc kubenswrapper[4858]: I0218 00:53:31.656102 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x4wqp" event={"ID":"423548cb-6c87-4876-a08c-fd64805971ea","Type":"ContainerDied","Data":"1c85fdcbed14012ce2425b4f6a426c0e3b08b72d2006aac0bcc305570620b15d"} Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.802205 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.805653 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.817290 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-k2wn6" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899239 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-credential-keys\") pod \"27254f13-cc74-43cf-9b54-08d87277de31\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899292 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-scripts\") pod \"27254f13-cc74-43cf-9b54-08d87277de31\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899328 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g95v\" (UniqueName: \"kubernetes.io/projected/27254f13-cc74-43cf-9b54-08d87277de31-kube-api-access-4g95v\") pod \"27254f13-cc74-43cf-9b54-08d87277de31\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899347 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-logs\") pod \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-scripts\") pod \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899526 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v47m\" (UniqueName: 
\"kubernetes.io/projected/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-kube-api-access-7v47m\") pod \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899571 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-config-data\") pod \"27254f13-cc74-43cf-9b54-08d87277de31\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899588 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xbk4\" (UniqueName: \"kubernetes.io/projected/423548cb-6c87-4876-a08c-fd64805971ea-kube-api-access-4xbk4\") pod \"423548cb-6c87-4876-a08c-fd64805971ea\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899604 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-combined-ca-bundle\") pod \"423548cb-6c87-4876-a08c-fd64805971ea\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-combined-ca-bundle\") pod \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899669 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-combined-ca-bundle\") pod \"27254f13-cc74-43cf-9b54-08d87277de31\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899701 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-fernet-keys\") pod \"27254f13-cc74-43cf-9b54-08d87277de31\" (UID: \"27254f13-cc74-43cf-9b54-08d87277de31\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899747 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-db-sync-config-data\") pod \"423548cb-6c87-4876-a08c-fd64805971ea\" (UID: \"423548cb-6c87-4876-a08c-fd64805971ea\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.899767 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-config-data\") pod \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\" (UID: \"8b6ceabb-aac4-48fc-9d11-abbedea94d2d\") " Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.912180 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "423548cb-6c87-4876-a08c-fd64805971ea" (UID: "423548cb-6c87-4876-a08c-fd64805971ea"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.913140 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-logs" (OuterVolumeSpecName: "logs") pod "8b6ceabb-aac4-48fc-9d11-abbedea94d2d" (UID: "8b6ceabb-aac4-48fc-9d11-abbedea94d2d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.913799 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "27254f13-cc74-43cf-9b54-08d87277de31" (UID: "27254f13-cc74-43cf-9b54-08d87277de31"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.914438 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-kube-api-access-7v47m" (OuterVolumeSpecName: "kube-api-access-7v47m") pod "8b6ceabb-aac4-48fc-9d11-abbedea94d2d" (UID: "8b6ceabb-aac4-48fc-9d11-abbedea94d2d"). InnerVolumeSpecName "kube-api-access-7v47m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.914478 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423548cb-6c87-4876-a08c-fd64805971ea-kube-api-access-4xbk4" (OuterVolumeSpecName: "kube-api-access-4xbk4") pod "423548cb-6c87-4876-a08c-fd64805971ea" (UID: "423548cb-6c87-4876-a08c-fd64805971ea"). InnerVolumeSpecName "kube-api-access-4xbk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.914801 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-scripts" (OuterVolumeSpecName: "scripts") pod "8b6ceabb-aac4-48fc-9d11-abbedea94d2d" (UID: "8b6ceabb-aac4-48fc-9d11-abbedea94d2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.927067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27254f13-cc74-43cf-9b54-08d87277de31-kube-api-access-4g95v" (OuterVolumeSpecName: "kube-api-access-4g95v") pod "27254f13-cc74-43cf-9b54-08d87277de31" (UID: "27254f13-cc74-43cf-9b54-08d87277de31"). InnerVolumeSpecName "kube-api-access-4g95v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.934200 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "27254f13-cc74-43cf-9b54-08d87277de31" (UID: "27254f13-cc74-43cf-9b54-08d87277de31"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.934270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-scripts" (OuterVolumeSpecName: "scripts") pod "27254f13-cc74-43cf-9b54-08d87277de31" (UID: "27254f13-cc74-43cf-9b54-08d87277de31"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.957894 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27254f13-cc74-43cf-9b54-08d87277de31" (UID: "27254f13-cc74-43cf-9b54-08d87277de31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.962605 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-config-data" (OuterVolumeSpecName: "config-data") pod "27254f13-cc74-43cf-9b54-08d87277de31" (UID: "27254f13-cc74-43cf-9b54-08d87277de31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.989606 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-config-data" (OuterVolumeSpecName: "config-data") pod "8b6ceabb-aac4-48fc-9d11-abbedea94d2d" (UID: "8b6ceabb-aac4-48fc-9d11-abbedea94d2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:34 crc kubenswrapper[4858]: I0218 00:53:34.995003 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b6ceabb-aac4-48fc-9d11-abbedea94d2d" (UID: "8b6ceabb-aac4-48fc-9d11-abbedea94d2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002285 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002315 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v47m\" (UniqueName: \"kubernetes.io/projected/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-kube-api-access-7v47m\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002328 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002338 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xbk4\" (UniqueName: \"kubernetes.io/projected/423548cb-6c87-4876-a08c-fd64805971ea-kube-api-access-4xbk4\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002346 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002354 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002361 4858 reconciler_common.go:293] "Volume detached for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002369 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002377 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002385 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002392 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27254f13-cc74-43cf-9b54-08d87277de31-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002401 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g95v\" (UniqueName: \"kubernetes.io/projected/27254f13-cc74-43cf-9b54-08d87277de31-kube-api-access-4g95v\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.002410 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6ceabb-aac4-48fc-9d11-abbedea94d2d-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.009673 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "423548cb-6c87-4876-a08c-fd64805971ea" (UID: "423548cb-6c87-4876-a08c-fd64805971ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.104193 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423548cb-6c87-4876-a08c-fd64805971ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.698859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerStarted","Data":"5e97e81cbdb2397d1acc161131ae6b1fc3124d6193fbfe8b4de2d7641ce2d09d"} Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.700948 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x4wqp" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.700963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x4wqp" event={"ID":"423548cb-6c87-4876-a08c-fd64805971ea","Type":"ContainerDied","Data":"de074d2ac6cdf89afd0b1d14340f07d4ae8f8344974f6dc3f226ecdaa97e9aca"} Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.701015 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de074d2ac6cdf89afd0b1d14340f07d4ae8f8344974f6dc3f226ecdaa97e9aca" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.702563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k2wn6" event={"ID":"8b6ceabb-aac4-48fc-9d11-abbedea94d2d","Type":"ContainerDied","Data":"ff5bbb02b35bc62b1f6ff28ebb7cc0fb68c2f64af24c3308c7143e88b98fd98b"} Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.702681 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5bbb02b35bc62b1f6ff28ebb7cc0fb68c2f64af24c3308c7143e88b98fd98b" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.702611 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-k2wn6" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.704672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2cphn" event={"ID":"27254f13-cc74-43cf-9b54-08d87277de31","Type":"ContainerDied","Data":"fde2c88a403acffe96315b4c3ed906bc50341be27800eb24c88639e794dff289"} Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.704707 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde2c88a403acffe96315b4c3ed906bc50341be27800eb24c88639e794dff289" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.704678 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2cphn" Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.706759 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrqgb" event={"ID":"f69b36cb-f694-4e90-b673-47681459414b","Type":"ContainerStarted","Data":"a083da6369422ac1d40b19b03f96614ec30f4c94c278c782d9005c5565f2464a"} Feb 18 00:53:35 crc kubenswrapper[4858]: I0218 00:53:35.726146 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wrqgb" podStartSLOduration=2.8218653700000003 podStartE2EDuration="50.726126789s" podCreationTimestamp="2026-02-18 00:52:45 +0000 UTC" firstStartedPulling="2026-02-18 00:52:46.90466009 +0000 UTC m=+1120.210496822" lastFinishedPulling="2026-02-18 00:53:34.808921489 +0000 UTC m=+1168.114758241" observedRunningTime="2026-02-18 00:53:35.72405215 +0000 UTC m=+1169.029888892" watchObservedRunningTime="2026-02-18 00:53:35.726126789 +0000 UTC m=+1169.031963521" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.025282 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5cf649f6f9-dtsbl"] Feb 18 00:53:36 crc kubenswrapper[4858]: E0218 00:53:36.025714 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="init" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.025726 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="init" Feb 18 00:53:36 crc kubenswrapper[4858]: E0218 00:53:36.025738 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423548cb-6c87-4876-a08c-fd64805971ea" containerName="barbican-db-sync" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.025744 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="423548cb-6c87-4876-a08c-fd64805971ea" containerName="barbican-db-sync" Feb 18 00:53:36 crc kubenswrapper[4858]: E0218 00:53:36.025761 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6ceabb-aac4-48fc-9d11-abbedea94d2d" containerName="placement-db-sync" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.025768 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6ceabb-aac4-48fc-9d11-abbedea94d2d" containerName="placement-db-sync" Feb 18 00:53:36 crc kubenswrapper[4858]: E0218 00:53:36.025782 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27254f13-cc74-43cf-9b54-08d87277de31" containerName="keystone-bootstrap" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.025787 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="27254f13-cc74-43cf-9b54-08d87277de31" containerName="keystone-bootstrap" Feb 18 00:53:36 crc kubenswrapper[4858]: E0218 00:53:36.025800 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="dnsmasq-dns" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.025806 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="dnsmasq-dns" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.025967 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="27254f13-cc74-43cf-9b54-08d87277de31" containerName="keystone-bootstrap" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.025982 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="423548cb-6c87-4876-a08c-fd64805971ea" containerName="barbican-db-sync" Feb 18 00:53:36 crc 
kubenswrapper[4858]: I0218 00:53:36.025990 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6ceabb-aac4-48fc-9d11-abbedea94d2d" containerName="placement-db-sync" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.026011 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="086d4d86-55ee-4c9b-b1c0-5cce4212d8e4" containerName="dnsmasq-dns" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.026704 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.039250 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.039419 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.039523 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.039612 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x4lrd" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.039696 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.039805 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.045537 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-564596946d-g2qdq"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.047007 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.054635 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.054706 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.054877 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.054973 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.055090 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vktvv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.064538 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cf649f6f9-dtsbl"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.067890 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-564596946d-g2qdq"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129416 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xpv4\" (UniqueName: \"kubernetes.io/projected/245419e7-d61b-4f15-acef-861d6025e566-kube-api-access-8xpv4\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129670 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-internal-tls-certs\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-public-tls-certs\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129749 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-credential-keys\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-scripts\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-public-tls-certs\") pod 
\"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129826 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-combined-ca-bundle\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129859 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-fernet-keys\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129883 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245419e7-d61b-4f15-acef-861d6025e566-logs\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-scripts\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129928 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-combined-ca-bundle\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129959 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdfn8\" (UniqueName: \"kubernetes.io/projected/26a5ef88-d04d-4360-97b2-de3aab55c822-kube-api-access-zdfn8\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129979 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-internal-tls-certs\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.129999 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-config-data\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.130021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-config-data\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.166539 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5d98494dc7-ncwkj"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.168103 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.172461 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-684c84b858-zh69p"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.173947 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.174945 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.174997 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vpb4j" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.175344 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.175485 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.198449 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d98494dc7-ncwkj"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.209357 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-684c84b858-zh69p"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xpv4\" (UniqueName: \"kubernetes.io/projected/245419e7-d61b-4f15-acef-861d6025e566-kube-api-access-8xpv4\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-internal-tls-certs\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259607 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data-custom\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: 
\"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcwmc\" (UniqueName: \"kubernetes.io/projected/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-kube-api-access-xcwmc\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-public-tls-certs\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259678 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-logs\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259696 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-credential-keys\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-scripts\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-public-tls-certs\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259754 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-combined-ca-bundle\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259797 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data-custom\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-fernet-keys\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245419e7-d61b-4f15-acef-861d6025e566-logs\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259868 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-combined-ca-bundle\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259891 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wh8j\" (UniqueName: \"kubernetes.io/projected/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-kube-api-access-9wh8j\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-scripts\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259925 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-combined-ca-bundle\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259944 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-logs\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259961 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-combined-ca-bundle\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zdfn8\" (UniqueName: \"kubernetes.io/projected/26a5ef88-d04d-4360-97b2-de3aab55c822-kube-api-access-zdfn8\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.259998 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-internal-tls-certs\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.260030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-config-data\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.260053 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-config-data\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.264485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245419e7-d61b-4f15-acef-861d6025e566-logs\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.267194 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bw4lt"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.267467 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" podUID="1c2489a7-5053-411a-9df6-8d6a659a36e2" containerName="dnsmasq-dns" containerID="cri-o://344900eec47cc44ef1c3ed3261f039350fe922448c79e1530497b1ad9c8c070e" gracePeriod=10 Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.270673 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.274474 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-internal-tls-certs\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.285068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-credential-keys\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.298818 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-scripts\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " 
pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.299404 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-config-data\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.299860 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-public-tls-certs\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.300716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xpv4\" (UniqueName: \"kubernetes.io/projected/245419e7-d61b-4f15-acef-861d6025e566-kube-api-access-8xpv4\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.309406 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-scripts\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.310141 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-config-data\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.310632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-internal-tls-certs\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.312477 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-combined-ca-bundle\") pod \"placement-564596946d-g2qdq\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.312900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-public-tls-certs\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.324321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-fernet-keys\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.332225 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-85ff748b95-qtmhw"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.333696 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.335213 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26a5ef88-d04d-4360-97b2-de3aab55c822-combined-ca-bundle\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.337682 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdfn8\" (UniqueName: \"kubernetes.io/projected/26a5ef88-d04d-4360-97b2-de3aab55c822-kube-api-access-zdfn8\") pod \"keystone-5cf649f6f9-dtsbl\" (UID: \"26a5ef88-d04d-4360-97b2-de3aab55c822\") " pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.349323 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qtmhw"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.361846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.361889 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data-custom\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.361921 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcwmc\" (UniqueName: \"kubernetes.io/projected/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-kube-api-access-xcwmc\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.361957 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-logs\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.361995 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.362026 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data-custom\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: 
\"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.362072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-combined-ca-bundle\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.362093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wh8j\" (UniqueName: \"kubernetes.io/projected/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-kube-api-access-9wh8j\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.362123 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-logs\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.362137 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-combined-ca-bundle\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.369758 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data-custom\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.370184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-logs\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.377289 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-combined-ca-bundle\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.378832 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-logs\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.387047 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data-custom\") pod 
\"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.387643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.387979 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.403257 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-combined-ca-bundle\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.409570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcwmc\" (UniqueName: \"kubernetes.io/projected/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-kube-api-access-xcwmc\") pod \"barbican-keystone-listener-684c84b858-zh69p\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.409854 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wh8j\" (UniqueName: \"kubernetes.io/projected/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-kube-api-access-9wh8j\") pod \"barbican-worker-5d98494dc7-ncwkj\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.416351 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.429686 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.457259 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-657d9bcf46-vrpmh"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.458968 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.464800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282gl\" (UniqueName: \"kubernetes.io/projected/215e6dbb-5ebf-446a-8326-1e96d37a38c3-kube-api-access-282gl\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.464855 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.466176 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-config\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.466688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.466757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.466824 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.479863 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.502394 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-657d9bcf46-vrpmh"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.529059 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.549845 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-74dd7b5ff9-wg9dt"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.551320 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.567091 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.568345 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283581c7-0894-47b6-b933-f36c54f50e4b-logs\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.574713 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-666bf74cdd-hjbwv"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.576321 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.580674 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.580804 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwbbh\" (UniqueName: \"kubernetes.io/projected/283581c7-0894-47b6-b933-f36c54f50e4b-kube-api-access-wwbbh\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.580855 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.580919 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.581007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.581023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-combined-ca-bundle\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.581105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-282gl\" (UniqueName: \"kubernetes.io/projected/215e6dbb-5ebf-446a-8326-1e96d37a38c3-kube-api-access-282gl\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: 
\"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.581175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.581218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data-custom\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.581250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-config\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.582128 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-config\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.582674 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.583138 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.583664 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.584151 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.604158 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-74dd7b5ff9-wg9dt"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.639255 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-282gl\" (UniqueName: 
\"kubernetes.io/projected/215e6dbb-5ebf-446a-8326-1e96d37a38c3-kube-api-access-282gl\") pod \"dnsmasq-dns-85ff748b95-qtmhw\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.649734 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b45d5d658-tw8nb"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.656677 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.673559 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-666bf74cdd-hjbwv"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689130 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-public-tls-certs\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54swq\" (UniqueName: \"kubernetes.io/projected/08bb5fcc-79c7-4733-a26a-192b9b9fa955-kube-api-access-54swq\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689196 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-internal-tls-certs\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f21330fb-32fb-43a6-afdb-9337c060f960-logs\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08bb5fcc-79c7-4733-a26a-192b9b9fa955-logs\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689281 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data-custom\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689298 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-scripts\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" 
Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f21330fb-32fb-43a6-afdb-9337c060f960-config-data-custom\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vv7g\" (UniqueName: \"kubernetes.io/projected/f21330fb-32fb-43a6-afdb-9337c060f960-kube-api-access-8vv7g\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283581c7-0894-47b6-b933-f36c54f50e4b-logs\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689393 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689427 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-config-data\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689447 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21330fb-32fb-43a6-afdb-9337c060f960-combined-ca-bundle\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689465 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwbbh\" (UniqueName: \"kubernetes.io/projected/283581c7-0894-47b6-b933-f36c54f50e4b-kube-api-access-wwbbh\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-combined-ca-bundle\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21330fb-32fb-43a6-afdb-9337c060f960-config-data\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: 
\"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.689640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-combined-ca-bundle\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.690145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283581c7-0894-47b6-b933-f36c54f50e4b-logs\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.694114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-combined-ca-bundle\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.699037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data-custom\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.703530 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.776751 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b45d5d658-tw8nb"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.812977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ksgc\" (UniqueName: \"kubernetes.io/projected/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-kube-api-access-9ksgc\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.813060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21330fb-32fb-43a6-afdb-9337c060f960-config-data\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.813438 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-config-data-custom\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.813508 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-logs\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.819922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-public-tls-certs\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.820016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54swq\" (UniqueName: \"kubernetes.io/projected/08bb5fcc-79c7-4733-a26a-192b9b9fa955-kube-api-access-54swq\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.820049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-internal-tls-certs\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.820123 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f21330fb-32fb-43a6-afdb-9337c060f960-logs\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.820155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08bb5fcc-79c7-4733-a26a-192b9b9fa955-logs\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.820343 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-combined-ca-bundle\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.820439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-scripts\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.820553 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f21330fb-32fb-43a6-afdb-9337c060f960-config-data-custom\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.820608 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vv7g\" (UniqueName: \"kubernetes.io/projected/f21330fb-32fb-43a6-afdb-9337c060f960-kube-api-access-8vv7g\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.822657 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-config-data\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.822718 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21330fb-32fb-43a6-afdb-9337c060f960-combined-ca-bundle\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.823913 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-config-data\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.823997 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-combined-ca-bundle\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.824241 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f21330fb-32fb-43a6-afdb-9337c060f960-logs\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.842579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-combined-ca-bundle\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.870855 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-scripts\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.881006 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwbbh\" (UniqueName: \"kubernetes.io/projected/283581c7-0894-47b6-b933-f36c54f50e4b-kube-api-access-wwbbh\") pod \"barbican-api-657d9bcf46-vrpmh\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.881424 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-config-data\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.881933 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-public-tls-certs\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.882868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08bb5fcc-79c7-4733-a26a-192b9b9fa955-logs\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.883137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21330fb-32fb-43a6-afdb-9337c060f960-combined-ca-bundle\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.893165 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vv7g\" (UniqueName: \"kubernetes.io/projected/f21330fb-32fb-43a6-afdb-9337c060f960-kube-api-access-8vv7g\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.903546 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21330fb-32fb-43a6-afdb-9337c060f960-config-data\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.907659 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.911192 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54swq\" (UniqueName: \"kubernetes.io/projected/08bb5fcc-79c7-4733-a26a-192b9b9fa955-kube-api-access-54swq\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.923736 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f21330fb-32fb-43a6-afdb-9337c060f960-config-data-custom\") pod \"barbican-worker-74dd7b5ff9-wg9dt\" (UID: \"f21330fb-32fb-43a6-afdb-9337c060f960\") " pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.924551 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.924970 4858 generic.go:334] "Generic (PLEG): container finished" podID="1c2489a7-5053-411a-9df6-8d6a659a36e2" containerID="344900eec47cc44ef1c3ed3261f039350fe922448c79e1530497b1ad9c8c070e" exitCode=0 Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.924996 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" event={"ID":"1c2489a7-5053-411a-9df6-8d6a659a36e2","Type":"ContainerDied","Data":"344900eec47cc44ef1c3ed3261f039350fe922448c79e1530497b1ad9c8c070e"} Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.925974 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ksgc\" (UniqueName: \"kubernetes.io/projected/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-kube-api-access-9ksgc\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.926015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-config-data-custom\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.926038 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-logs\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.926104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-combined-ca-bundle\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.926178 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-config-data\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.935721 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59c64b4d54-p2c7q"] Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.938655 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.941950 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-logs\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.946550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-config-data-custom\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.950617 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb5fcc-79c7-4733-a26a-192b9b9fa955-internal-tls-certs\") pod \"placement-666bf74cdd-hjbwv\" (UID: \"08bb5fcc-79c7-4733-a26a-192b9b9fa955\") " pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.953984 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-config-data\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.972368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-combined-ca-bundle\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.986510 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.989020 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.999563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ksgc\" (UniqueName: \"kubernetes.io/projected/cb794842-ad8f-4c9f-886b-b96df4bf5e5e-kube-api-access-9ksgc\") pod \"barbican-keystone-listener-6b45d5d658-tw8nb\" (UID: \"cb794842-ad8f-4c9f-886b-b96df4bf5e5e\") " pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:36 crc kubenswrapper[4858]: I0218 00:53:36.999629 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c64b4d54-p2c7q"] Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.004344 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.031901 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv589\" (UniqueName: \"kubernetes.io/projected/596c985d-60ec-43fa-aeb2-73b10d64d750-kube-api-access-kv589\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.032197 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/596c985d-60ec-43fa-aeb2-73b10d64d750-logs\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.032234 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data-custom\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.032339 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.032485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-combined-ca-bundle\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.140615 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-combined-ca-bundle\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.143645 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv589\" (UniqueName: \"kubernetes.io/projected/596c985d-60ec-43fa-aeb2-73b10d64d750-kube-api-access-kv589\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.143716 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/596c985d-60ec-43fa-aeb2-73b10d64d750-logs\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.143761 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data-custom\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.143922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.144771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/596c985d-60ec-43fa-aeb2-73b10d64d750-logs\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.154577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-combined-ca-bundle\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.163245 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.163480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv589\" (UniqueName: \"kubernetes.io/projected/596c985d-60ec-43fa-aeb2-73b10d64d750-kube-api-access-kv589\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.165389 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data-custom\") pod \"barbican-api-59c64b4d54-p2c7q\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.191935 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.248187 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-swift-storage-0\") pod \"1c2489a7-5053-411a-9df6-8d6a659a36e2\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.248259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-svc\") pod \"1c2489a7-5053-411a-9df6-8d6a659a36e2\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.248382 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qljdd\" (UniqueName: \"kubernetes.io/projected/1c2489a7-5053-411a-9df6-8d6a659a36e2-kube-api-access-qljdd\") pod \"1c2489a7-5053-411a-9df6-8d6a659a36e2\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.248508 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-nb\") pod \"1c2489a7-5053-411a-9df6-8d6a659a36e2\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.248586 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-config\") pod \"1c2489a7-5053-411a-9df6-8d6a659a36e2\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.248627 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-sb\") pod \"1c2489a7-5053-411a-9df6-8d6a659a36e2\" (UID: \"1c2489a7-5053-411a-9df6-8d6a659a36e2\") " Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.269474 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2489a7-5053-411a-9df6-8d6a659a36e2-kube-api-access-qljdd" (OuterVolumeSpecName: "kube-api-access-qljdd") pod "1c2489a7-5053-411a-9df6-8d6a659a36e2" (UID: "1c2489a7-5053-411a-9df6-8d6a659a36e2"). InnerVolumeSpecName "kube-api-access-qljdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.326794 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.353789 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qljdd\" (UniqueName: \"kubernetes.io/projected/1c2489a7-5053-411a-9df6-8d6a659a36e2-kube-api-access-qljdd\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.428287 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c2489a7-5053-411a-9df6-8d6a659a36e2" (UID: "1c2489a7-5053-411a-9df6-8d6a659a36e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.428775 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-config" (OuterVolumeSpecName: "config") pod "1c2489a7-5053-411a-9df6-8d6a659a36e2" (UID: "1c2489a7-5053-411a-9df6-8d6a659a36e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.456856 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.456886 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.473856 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c2489a7-5053-411a-9df6-8d6a659a36e2" (UID: "1c2489a7-5053-411a-9df6-8d6a659a36e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.564623 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.590217 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c2489a7-5053-411a-9df6-8d6a659a36e2" (UID: "1c2489a7-5053-411a-9df6-8d6a659a36e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.603835 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cf649f6f9-dtsbl"] Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.603866 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-564596946d-g2qdq"] Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.612945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c2489a7-5053-411a-9df6-8d6a659a36e2" (UID: "1c2489a7-5053-411a-9df6-8d6a659a36e2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.669899 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.669944 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c2489a7-5053-411a-9df6-8d6a659a36e2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:37 crc kubenswrapper[4858]: W0218 00:53:37.674660 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod245419e7_d61b_4f15_acef_861d6025e566.slice/crio-f2761e1eb357780e2e98285fa77664e075491de1f67e657b2ec1a5b35e23be07 WatchSource:0}: Error finding container f2761e1eb357780e2e98285fa77664e075491de1f67e657b2ec1a5b35e23be07: Status 404 returned error can't find the container with id f2761e1eb357780e2e98285fa77664e075491de1f67e657b2ec1a5b35e23be07 Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.942827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-bpmww" event={"ID":"48a7b55c-92f4-41e7-b862-45eadd76013b","Type":"ContainerStarted","Data":"cd57ef83a6af653dfb5926b9290842f9dfada85b09ef09641fff73292c3f5a89"} Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.944316 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-564596946d-g2qdq" event={"ID":"245419e7-d61b-4f15-acef-861d6025e566","Type":"ContainerStarted","Data":"f2761e1eb357780e2e98285fa77664e075491de1f67e657b2ec1a5b35e23be07"} Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.947974 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.947967 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bw4lt" event={"ID":"1c2489a7-5053-411a-9df6-8d6a659a36e2","Type":"ContainerDied","Data":"578799dc36a99317241c52b736df5470b1c27a284bb32c8fe3175ae4a41223e0"} Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.948126 4858 scope.go:117] "RemoveContainer" containerID="344900eec47cc44ef1c3ed3261f039350fe922448c79e1530497b1ad9c8c070e" Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.953038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cf649f6f9-dtsbl" event={"ID":"26a5ef88-d04d-4360-97b2-de3aab55c822","Type":"ContainerStarted","Data":"216b0bba39e66136ee2151f448730ad7f12c3497294bc7b0822083349547d215"} Feb 18 00:53:37 crc kubenswrapper[4858]: I0218 00:53:37.969376 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-bpmww" podStartSLOduration=2.905765123 podStartE2EDuration="52.969357594s" podCreationTimestamp="2026-02-18 00:52:45 +0000 UTC" firstStartedPulling="2026-02-18 00:52:46.904737022 +0000 UTC m=+1120.210573754" lastFinishedPulling="2026-02-18 00:53:36.968329493 +0000 UTC m=+1170.274166225" observedRunningTime="2026-02-18 00:53:37.958692937 +0000 UTC m=+1171.264529669" watchObservedRunningTime="2026-02-18 00:53:37.969357594 +0000 UTC m=+1171.275194326" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.011856 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d98494dc7-ncwkj"] Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.021892 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-74dd7b5ff9-wg9dt"] Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.031395 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-684c84b858-zh69p"] Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.041020 4858 scope.go:117] "RemoveContainer" containerID="af09bf3cae731673fe3f9b669d6f61d904f19d825eddfe38cb5b398727fbdcf8" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.041129 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bw4lt"] Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.053187 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bw4lt"] Feb 18 00:53:38 crc kubenswrapper[4858]: W0218 00:53:38.056455 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0450ba6b_8a46_4e32_aebb_2021a7d0ff8c.slice/crio-b978d50cde38cad44007fcf7899a85be300ecec1c5a80e6a76451602de92b163 WatchSource:0}: Error finding container b978d50cde38cad44007fcf7899a85be300ecec1c5a80e6a76451602de92b163: Status 404 returned error can't find the container with id b978d50cde38cad44007fcf7899a85be300ecec1c5a80e6a76451602de92b163 Feb 18 00:53:38 crc kubenswrapper[4858]: W0218 00:53:38.076981 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6268a0a7_fb2d_437c_9a9a_3003a640f5b6.slice/crio-29a75fe9412d48cb1767ff98939a8834f088f21db33f8271c11aeb4380d48b82 WatchSource:0}: Error finding container 29a75fe9412d48cb1767ff98939a8834f088f21db33f8271c11aeb4380d48b82: Status 404 returned error can't find the container with id 
29a75fe9412d48cb1767ff98939a8834f088f21db33f8271c11aeb4380d48b82 Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.457632 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qtmhw"] Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.503648 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b45d5d658-tw8nb"] Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.509210 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.509241 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.538136 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.538190 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-657d9bcf46-vrpmh"] Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.538216 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.549258 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-666bf74cdd-hjbwv"] Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.630706 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c64b4d54-p2c7q"] Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.643547 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.644701 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.644762 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.644835 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.976180 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-666bf74cdd-hjbwv" event={"ID":"08bb5fcc-79c7-4733-a26a-192b9b9fa955","Type":"ContainerStarted","Data":"d2cd74fc30a1ec5a0a1ce9db1cf4f767a324be17d26fe8953f1072df8dd92f98"} Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.983976 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d98494dc7-ncwkj" event={"ID":"6268a0a7-fb2d-437c-9a9a-3003a640f5b6","Type":"ContainerStarted","Data":"29a75fe9412d48cb1767ff98939a8834f088f21db33f8271c11aeb4380d48b82"} Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.993823 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-564596946d-g2qdq" event={"ID":"245419e7-d61b-4f15-acef-861d6025e566","Type":"ContainerStarted","Data":"fd458991cc74d4993b215b20b11975d3e1322a8d6c981e44aa8d7c3dfd75cbf3"} Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.993863 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-564596946d-g2qdq" 
event={"ID":"245419e7-d61b-4f15-acef-861d6025e566","Type":"ContainerStarted","Data":"e9247ab0bf2a23a169361855ed550448d58ce164c047f2035caed89f549f22c7"} Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.995057 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:38 crc kubenswrapper[4858]: I0218 00:53:38.995082 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-564596946d-g2qdq" Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.012388 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c64b4d54-p2c7q" event={"ID":"596c985d-60ec-43fa-aeb2-73b10d64d750","Type":"ContainerStarted","Data":"b765cce33687f221917059d610fc58c653527a849226a5df77ed7e1c79cafd6e"} Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.026646 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" event={"ID":"cb794842-ad8f-4c9f-886b-b96df4bf5e5e","Type":"ContainerStarted","Data":"8693c4a0b72d603e12dbe8f6d485a027b164e753eb597e909ce7fa711e998b1c"} Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.028646 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" event={"ID":"215e6dbb-5ebf-446a-8326-1e96d37a38c3","Type":"ContainerStarted","Data":"9dab76244bfa260fef65d626021d59bac811d8f06f851541af86176a45805f19"} Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.033969 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cf649f6f9-dtsbl" event={"ID":"26a5ef88-d04d-4360-97b2-de3aab55c822","Type":"ContainerStarted","Data":"1c8666e5469ff7fc2c732c53c6553517fabf37ad84be17f0efd1b5da29989b59"} Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.035815 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.037077 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-564596946d-g2qdq" podStartSLOduration=4.037066332 podStartE2EDuration="4.037066332s" podCreationTimestamp="2026-02-18 00:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:39.020438509 +0000 UTC m=+1172.326275241" watchObservedRunningTime="2026-02-18 00:53:39.037066332 +0000 UTC m=+1172.342903064" Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.038435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" event={"ID":"f21330fb-32fb-43a6-afdb-9337c060f960","Type":"ContainerStarted","Data":"2756ecd00d38a8119f55d9c39cdcc70c553cbcfcc5b31debc1846c1cb8f43303"} Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.041193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" event={"ID":"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c","Type":"ContainerStarted","Data":"b978d50cde38cad44007fcf7899a85be300ecec1c5a80e6a76451602de92b163"} Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.043637 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657d9bcf46-vrpmh" event={"ID":"283581c7-0894-47b6-b933-f36c54f50e4b","Type":"ContainerStarted","Data":"181d030cc9d1105f15031919a41b3b1910a06865bedd5963842afb47904d9608"} Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.044072 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.044245 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.044788 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.044815 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.066187 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5cf649f6f9-dtsbl" podStartSLOduration=4.066171057 podStartE2EDuration="4.066171057s" podCreationTimestamp="2026-02-18 00:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:39.059424603 +0000 UTC m=+1172.365261335" watchObservedRunningTime="2026-02-18 00:53:39.066171057 +0000 UTC m=+1172.372007779" Feb 18 00:53:39 crc kubenswrapper[4858]: I0218 00:53:39.441146 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2489a7-5053-411a-9df6-8d6a659a36e2" path="/var/lib/kubelet/pods/1c2489a7-5053-411a-9df6-8d6a659a36e2/volumes" Feb 18 00:53:40 crc kubenswrapper[4858]: I0218 00:53:40.060511 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-666bf74cdd-hjbwv" event={"ID":"08bb5fcc-79c7-4733-a26a-192b9b9fa955","Type":"ContainerStarted","Data":"181d683ed57fabd4bd1cad8f2194ac283bcf4be6c78330c1e12eafd4b7ec6637"} Feb 18 00:53:40 crc kubenswrapper[4858]: I0218 00:53:40.084527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c64b4d54-p2c7q" event={"ID":"596c985d-60ec-43fa-aeb2-73b10d64d750","Type":"ContainerStarted","Data":"40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66"} Feb 18 00:53:40 crc kubenswrapper[4858]: I0218 00:53:40.095745 4858 generic.go:334] "Generic (PLEG): container finished" podID="215e6dbb-5ebf-446a-8326-1e96d37a38c3" containerID="2f47d115501edceea4ec9206ac64df077a75c19704c9691a9f468eb9a806289b" exitCode=0 Feb 18 00:53:40 crc kubenswrapper[4858]: I0218 00:53:40.095829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" event={"ID":"215e6dbb-5ebf-446a-8326-1e96d37a38c3","Type":"ContainerDied","Data":"2f47d115501edceea4ec9206ac64df077a75c19704c9691a9f468eb9a806289b"} Feb 18 00:53:40 crc kubenswrapper[4858]: I0218 00:53:40.127234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657d9bcf46-vrpmh" event={"ID":"283581c7-0894-47b6-b933-f36c54f50e4b","Type":"ContainerStarted","Data":"4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301"} Feb 18 00:53:40 crc kubenswrapper[4858]: I0218 00:53:40.127270 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657d9bcf46-vrpmh" event={"ID":"283581c7-0894-47b6-b933-f36c54f50e4b","Type":"ContainerStarted","Data":"1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528"} Feb 18 00:53:40 crc kubenswrapper[4858]: I0218 00:53:40.136698 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:40 crc kubenswrapper[4858]: I0218 00:53:40.136739 4858 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:40 crc kubenswrapper[4858]: I0218 00:53:40.225282 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-657d9bcf46-vrpmh" podStartSLOduration=4.225258187 podStartE2EDuration="4.225258187s" podCreationTimestamp="2026-02-18 00:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:40.167883287 +0000 UTC m=+1173.473720019" watchObservedRunningTime="2026-02-18 00:53:40.225258187 +0000 UTC m=+1173.531094919" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.164779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" event={"ID":"215e6dbb-5ebf-446a-8326-1e96d37a38c3","Type":"ContainerStarted","Data":"ca87f25f8139a0c6b9d1c3980ef6cf6225fe3c7b9dea72017e89250425d0003c"} Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.166555 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.173863 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-666bf74cdd-hjbwv" event={"ID":"08bb5fcc-79c7-4733-a26a-192b9b9fa955","Type":"ContainerStarted","Data":"324c465797044605d48ccd8e54edbd09372771164caef2ae963f9c99afcfa5f9"} Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.174829 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.174856 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.180837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c64b4d54-p2c7q" event={"ID":"596c985d-60ec-43fa-aeb2-73b10d64d750","Type":"ContainerStarted","Data":"f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110"} Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.180885 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.181595 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.181612 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.182159 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.182172 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.183450 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.209288 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" podStartSLOduration=5.209269146 podStartE2EDuration="5.209269146s" podCreationTimestamp="2026-02-18 00:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:41.196970299 +0000 UTC m=+1174.502807031" 
watchObservedRunningTime="2026-02-18 00:53:41.209269146 +0000 UTC m=+1174.515105878" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.223640 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-666bf74cdd-hjbwv" podStartSLOduration=5.223623034 podStartE2EDuration="5.223623034s" podCreationTimestamp="2026-02-18 00:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:41.217812853 +0000 UTC m=+1174.523649585" watchObservedRunningTime="2026-02-18 00:53:41.223623034 +0000 UTC m=+1174.529459766" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.335283 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59c64b4d54-p2c7q" podStartSLOduration=5.335262898 podStartE2EDuration="5.335262898s" podCreationTimestamp="2026-02-18 00:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:41.24578374 +0000 UTC m=+1174.551620472" watchObservedRunningTime="2026-02-18 00:53:41.335262898 +0000 UTC m=+1174.641099630" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.351290 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59c64b4d54-p2c7q"] Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.379516 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-674dbc688d-knngw"] Feb 18 00:53:41 crc kubenswrapper[4858]: E0218 00:53:41.379965 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2489a7-5053-411a-9df6-8d6a659a36e2" containerName="dnsmasq-dns" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.379981 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2489a7-5053-411a-9df6-8d6a659a36e2" containerName="dnsmasq-dns" Feb 18 00:53:41 crc kubenswrapper[4858]: E0218 00:53:41.379995 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2489a7-5053-411a-9df6-8d6a659a36e2" containerName="init" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.380001 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2489a7-5053-411a-9df6-8d6a659a36e2" containerName="init" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.380179 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2489a7-5053-411a-9df6-8d6a659a36e2" containerName="dnsmasq-dns" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.381230 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.383341 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.391773 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.402479 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-674dbc688d-knngw"] Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.488880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-config-data-custom\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.488927 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-config-data\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.489004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ed2521-63c1-48e5-902a-7b92102c74bb-logs\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.489053 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-combined-ca-bundle\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.489070 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-internal-tls-certs\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.489087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-public-tls-certs\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.489146 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsbch\" (UniqueName: \"kubernetes.io/projected/f9ed2521-63c1-48e5-902a-7b92102c74bb-kube-api-access-jsbch\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.590542 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-config-data-custom\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.590581 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-config-data\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.590625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ed2521-63c1-48e5-902a-7b92102c74bb-logs\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.590674 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-combined-ca-bundle\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.590691 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-internal-tls-certs\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.590706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-public-tls-certs\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.590741 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsbch\" (UniqueName: \"kubernetes.io/projected/f9ed2521-63c1-48e5-902a-7b92102c74bb-kube-api-access-jsbch\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.591317 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ed2521-63c1-48e5-902a-7b92102c74bb-logs\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.597161 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-config-data-custom\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.598998 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-internal-tls-certs\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.599940 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-config-data\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.600336 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-public-tls-certs\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.605968 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ed2521-63c1-48e5-902a-7b92102c74bb-combined-ca-bundle\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.610020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsbch\" (UniqueName: \"kubernetes.io/projected/f9ed2521-63c1-48e5-902a-7b92102c74bb-kube-api-access-jsbch\") pod \"barbican-api-674dbc688d-knngw\" (UID: \"f9ed2521-63c1-48e5-902a-7b92102c74bb\") " pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.704675 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:41 crc kubenswrapper[4858]: I0218 00:53:41.932220 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 00:53:42 crc kubenswrapper[4858]: I0218 00:53:41.999706 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 00:53:42 crc kubenswrapper[4858]: I0218 00:53:42.174981 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 00:53:42 crc kubenswrapper[4858]: I0218 00:53:42.186902 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:53:42 crc kubenswrapper[4858]: I0218 00:53:42.408353 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.126248 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-674dbc688d-knngw"] Feb 18 00:53:43 crc kubenswrapper[4858]: W0218 00:53:43.138090 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9ed2521_63c1_48e5_902a_7b92102c74bb.slice/crio-648e6751b47908458931547545a98e8c02826a4092cb65cdb625f6ca63307acf WatchSource:0}: Error finding container 648e6751b47908458931547545a98e8c02826a4092cb65cdb625f6ca63307acf: Status 404 returned error can't find the container with id 648e6751b47908458931547545a98e8c02826a4092cb65cdb625f6ca63307acf Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.202873 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d98494dc7-ncwkj" event={"ID":"6268a0a7-fb2d-437c-9a9a-3003a640f5b6","Type":"ContainerStarted","Data":"0fc8bfe7568342dac651c6944925ace9b5f8d06aec784d3e13411d6f5e6b1863"} Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.203223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d98494dc7-ncwkj" event={"ID":"6268a0a7-fb2d-437c-9a9a-3003a640f5b6","Type":"ContainerStarted","Data":"078b0d6c957956baffa38ea01d8c4953b78710475a473a7d46b4359ed37b1510"} Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.206744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" event={"ID":"f21330fb-32fb-43a6-afdb-9337c060f960","Type":"ContainerStarted","Data":"990282b3b5f4f20fd9f2393f0c11b3ba5de86eaf4748db46df399fa016ba7895"} Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.212139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" event={"ID":"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c","Type":"ContainerStarted","Data":"df5a1baecb296a9ddf03fb43d34534bf5773dc241d3cde3a2fd6d6646e31c71f"} Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.217575 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" event={"ID":"cb794842-ad8f-4c9f-886b-b96df4bf5e5e","Type":"ContainerStarted","Data":"89b8bef1966524fbe0d9a3248770cf2d19feb8b5d9bcd707c3e5ef619bde8b80"} Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.219606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-674dbc688d-knngw" 
event={"ID":"f9ed2521-63c1-48e5-902a-7b92102c74bb","Type":"ContainerStarted","Data":"648e6751b47908458931547545a98e8c02826a4092cb65cdb625f6ca63307acf"} Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.219655 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59c64b4d54-p2c7q" podUID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerName="barbican-api-log" containerID="cri-o://40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66" gracePeriod=30 Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.219839 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59c64b4d54-p2c7q" podUID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerName="barbican-api" containerID="cri-o://f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110" gracePeriod=30 Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.854242 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.883991 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5d98494dc7-ncwkj" podStartSLOduration=3.6076102150000002 podStartE2EDuration="7.883972525s" podCreationTimestamp="2026-02-18 00:53:36 +0000 UTC" firstStartedPulling="2026-02-18 00:53:38.082594628 +0000 UTC m=+1171.388431360" lastFinishedPulling="2026-02-18 00:53:42.358956938 +0000 UTC m=+1175.664793670" observedRunningTime="2026-02-18 00:53:43.223851822 +0000 UTC m=+1176.529688554" watchObservedRunningTime="2026-02-18 00:53:43.883972525 +0000 UTC m=+1177.189809257" Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.950340 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/596c985d-60ec-43fa-aeb2-73b10d64d750-logs\") pod \"596c985d-60ec-43fa-aeb2-73b10d64d750\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.950418 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-combined-ca-bundle\") pod \"596c985d-60ec-43fa-aeb2-73b10d64d750\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.950471 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data-custom\") pod \"596c985d-60ec-43fa-aeb2-73b10d64d750\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.950509 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data\") pod \"596c985d-60ec-43fa-aeb2-73b10d64d750\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.950558 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv589\" (UniqueName: \"kubernetes.io/projected/596c985d-60ec-43fa-aeb2-73b10d64d750-kube-api-access-kv589\") pod \"596c985d-60ec-43fa-aeb2-73b10d64d750\" (UID: \"596c985d-60ec-43fa-aeb2-73b10d64d750\") " Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.955251 4858 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/596c985d-60ec-43fa-aeb2-73b10d64d750-logs" (OuterVolumeSpecName: "logs") pod "596c985d-60ec-43fa-aeb2-73b10d64d750" (UID: "596c985d-60ec-43fa-aeb2-73b10d64d750"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.956158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/596c985d-60ec-43fa-aeb2-73b10d64d750-kube-api-access-kv589" (OuterVolumeSpecName: "kube-api-access-kv589") pod "596c985d-60ec-43fa-aeb2-73b10d64d750" (UID: "596c985d-60ec-43fa-aeb2-73b10d64d750"). InnerVolumeSpecName "kube-api-access-kv589". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.957663 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "596c985d-60ec-43fa-aeb2-73b10d64d750" (UID: "596c985d-60ec-43fa-aeb2-73b10d64d750"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:43 crc kubenswrapper[4858]: I0218 00:53:43.983615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "596c985d-60ec-43fa-aeb2-73b10d64d750" (UID: "596c985d-60ec-43fa-aeb2-73b10d64d750"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.004754 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data" (OuterVolumeSpecName: "config-data") pod "596c985d-60ec-43fa-aeb2-73b10d64d750" (UID: "596c985d-60ec-43fa-aeb2-73b10d64d750"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.052330 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.052366 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.052376 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596c985d-60ec-43fa-aeb2-73b10d64d750-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.052385 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv589\" (UniqueName: \"kubernetes.io/projected/596c985d-60ec-43fa-aeb2-73b10d64d750-kube-api-access-kv589\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.052396 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/596c985d-60ec-43fa-aeb2-73b10d64d750-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.237976 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-674dbc688d-knngw" event={"ID":"f9ed2521-63c1-48e5-902a-7b92102c74bb","Type":"ContainerStarted","Data":"1768b3a0f16130f4450b5153944e62edb0143c052f922469fb88661b74b7618d"} Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.238327 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-674dbc688d-knngw" event={"ID":"f9ed2521-63c1-48e5-902a-7b92102c74bb","Type":"ContainerStarted","Data":"d056e641b85fa9bb1910a797ea40b47123100979328c06f0d562dfd2e4d70b5f"} Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.238349 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.238365 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.247240 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" event={"ID":"f21330fb-32fb-43a6-afdb-9337c060f960","Type":"ContainerStarted","Data":"76a6f6efe8fcce4e0e4153724796d0df83ae9b446f7b320582f5d5c452ef3a72"} Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.262689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" event={"ID":"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c","Type":"ContainerStarted","Data":"fa36d5c24517e6823502a114c70e18f84603b0e86bcb588149fa50d929811f1d"} Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.266662 4858 generic.go:334] "Generic (PLEG): container finished" podID="f69b36cb-f694-4e90-b673-47681459414b" containerID="a083da6369422ac1d40b19b03f96614ec30f4c94c278c782d9005c5565f2464a" exitCode=0 Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.266725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrqgb" 
event={"ID":"f69b36cb-f694-4e90-b673-47681459414b","Type":"ContainerDied","Data":"a083da6369422ac1d40b19b03f96614ec30f4c94c278c782d9005c5565f2464a"} Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.268139 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-674dbc688d-knngw" podStartSLOduration=3.268127761 podStartE2EDuration="3.268127761s" podCreationTimestamp="2026-02-18 00:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:44.25446346 +0000 UTC m=+1177.560300202" watchObservedRunningTime="2026-02-18 00:53:44.268127761 +0000 UTC m=+1177.573964493" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.272196 4858 generic.go:334] "Generic (PLEG): container finished" podID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerID="f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110" exitCode=0 Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.272227 4858 generic.go:334] "Generic (PLEG): container finished" podID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerID="40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66" exitCode=143 Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.272315 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c64b4d54-p2c7q" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.276768 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c64b4d54-p2c7q" event={"ID":"596c985d-60ec-43fa-aeb2-73b10d64d750","Type":"ContainerDied","Data":"f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110"} Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.276825 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c64b4d54-p2c7q" event={"ID":"596c985d-60ec-43fa-aeb2-73b10d64d750","Type":"ContainerDied","Data":"40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66"} Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.276838 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c64b4d54-p2c7q" event={"ID":"596c985d-60ec-43fa-aeb2-73b10d64d750","Type":"ContainerDied","Data":"b765cce33687f221917059d610fc58c653527a849226a5df77ed7e1c79cafd6e"} Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.276860 4858 scope.go:117] "RemoveContainer" containerID="f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.278635 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-74dd7b5ff9-wg9dt" podStartSLOduration=3.989036595 podStartE2EDuration="8.278619276s" podCreationTimestamp="2026-02-18 00:53:36 +0000 UTC" firstStartedPulling="2026-02-18 00:53:38.057335586 +0000 UTC m=+1171.363172318" lastFinishedPulling="2026-02-18 00:53:42.346918267 +0000 UTC m=+1175.652754999" observedRunningTime="2026-02-18 00:53:44.276882903 +0000 UTC m=+1177.582719635" watchObservedRunningTime="2026-02-18 00:53:44.278619276 +0000 UTC m=+1177.584456008" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.305840 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" event={"ID":"cb794842-ad8f-4c9f-886b-b96df4bf5e5e","Type":"ContainerStarted","Data":"479b7b6613ae9ab010d912cbd75709ea51e700f921d54c3c948c54401050e499"} Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 
00:53:44.314052 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5d98494dc7-ncwkj"] Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.345108 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" podStartSLOduration=4.085808889 podStartE2EDuration="8.345092606s" podCreationTimestamp="2026-02-18 00:53:36 +0000 UTC" firstStartedPulling="2026-02-18 00:53:38.093058902 +0000 UTC m=+1171.398895624" lastFinishedPulling="2026-02-18 00:53:42.352342609 +0000 UTC m=+1175.658179341" observedRunningTime="2026-02-18 00:53:44.339351917 +0000 UTC m=+1177.645188649" watchObservedRunningTime="2026-02-18 00:53:44.345092606 +0000 UTC m=+1177.650929338" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.373093 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b45d5d658-tw8nb" podStartSLOduration=4.679525583 podStartE2EDuration="8.373076494s" podCreationTimestamp="2026-02-18 00:53:36 +0000 UTC" firstStartedPulling="2026-02-18 00:53:38.654987005 +0000 UTC m=+1171.960823737" lastFinishedPulling="2026-02-18 00:53:42.348537916 +0000 UTC m=+1175.654374648" observedRunningTime="2026-02-18 00:53:44.35765036 +0000 UTC m=+1177.663487092" watchObservedRunningTime="2026-02-18 00:53:44.373076494 +0000 UTC m=+1177.678913226" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.386391 4858 scope.go:117] "RemoveContainer" containerID="40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.403917 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59c64b4d54-p2c7q"] Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.413694 4858 scope.go:117] "RemoveContainer" containerID="f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.416087 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-59c64b4d54-p2c7q"] Feb 18 00:53:44 crc kubenswrapper[4858]: E0218 00:53:44.416176 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110\": container with ID starting with f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110 not found: ID does not exist" containerID="f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.416211 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110"} err="failed to get container status \"f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110\": rpc error: code = NotFound desc = could not find container \"f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110\": container with ID starting with f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110 not found: ID does not exist" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.416235 4858 scope.go:117] "RemoveContainer" containerID="40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66" Feb 18 00:53:44 crc kubenswrapper[4858]: E0218 00:53:44.419697 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66\": container with ID starting with 40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66 not found: ID does not exist" containerID="40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.419739 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66"} err="failed to get container status \"40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66\": rpc error: code = NotFound desc = could not find container \"40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66\": container with ID starting with 40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66 not found: ID does not exist" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.419769 4858 scope.go:117] "RemoveContainer" containerID="f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.422359 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110"} err="failed to get container status \"f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110\": rpc error: code = NotFound desc = could not find container \"f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110\": container with ID starting with f7fdfc076705b801108265e258a96ddf3671e28d86bc3478f29480e67924a110 not found: ID does not exist" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.422399 4858 scope.go:117] "RemoveContainer" containerID="40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.422738 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66"} err="failed to get container status \"40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66\": rpc error: code = NotFound desc = could not find container \"40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66\": container with ID starting with 40696f6be4357f673b2b0b13b3225449512fb67966e2a0ab854e12008e50ef66 not found: ID does not exist" Feb 18 00:53:44 crc kubenswrapper[4858]: I0218 00:53:44.431955 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-684c84b858-zh69p"] Feb 18 00:53:45 crc kubenswrapper[4858]: I0218 00:53:45.332411 4858 generic.go:334] "Generic (PLEG): container finished" podID="48a7b55c-92f4-41e7-b862-45eadd76013b" containerID="cd57ef83a6af653dfb5926b9290842f9dfada85b09ef09641fff73292c3f5a89" exitCode=0 Feb 18 00:53:45 crc kubenswrapper[4858]: I0218 00:53:45.332528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-bpmww" event={"ID":"48a7b55c-92f4-41e7-b862-45eadd76013b","Type":"ContainerDied","Data":"cd57ef83a6af653dfb5926b9290842f9dfada85b09ef09641fff73292c3f5a89"} Feb 18 00:53:45 crc kubenswrapper[4858]: I0218 00:53:45.342851 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5d98494dc7-ncwkj" podUID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerName="barbican-worker-log" containerID="cri-o://078b0d6c957956baffa38ea01d8c4953b78710475a473a7d46b4359ed37b1510" gracePeriod=30 Feb 18 00:53:45 crc 
kubenswrapper[4858]: I0218 00:53:45.342880 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5d98494dc7-ncwkj" podUID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerName="barbican-worker" containerID="cri-o://0fc8bfe7568342dac651c6944925ace9b5f8d06aec784d3e13411d6f5e6b1863" gracePeriod=30 Feb 18 00:53:45 crc kubenswrapper[4858]: I0218 00:53:45.437921 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="596c985d-60ec-43fa-aeb2-73b10d64d750" path="/var/lib/kubelet/pods/596c985d-60ec-43fa-aeb2-73b10d64d750/volumes" Feb 18 00:53:46 crc kubenswrapper[4858]: I0218 00:53:46.353865 4858 generic.go:334] "Generic (PLEG): container finished" podID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerID="0fc8bfe7568342dac651c6944925ace9b5f8d06aec784d3e13411d6f5e6b1863" exitCode=0 Feb 18 00:53:46 crc kubenswrapper[4858]: I0218 00:53:46.354542 4858 generic.go:334] "Generic (PLEG): container finished" podID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerID="078b0d6c957956baffa38ea01d8c4953b78710475a473a7d46b4359ed37b1510" exitCode=143 Feb 18 00:53:46 crc kubenswrapper[4858]: I0218 00:53:46.353927 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d98494dc7-ncwkj" event={"ID":"6268a0a7-fb2d-437c-9a9a-3003a640f5b6","Type":"ContainerDied","Data":"0fc8bfe7568342dac651c6944925ace9b5f8d06aec784d3e13411d6f5e6b1863"} Feb 18 00:53:46 crc kubenswrapper[4858]: I0218 00:53:46.354658 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d98494dc7-ncwkj" event={"ID":"6268a0a7-fb2d-437c-9a9a-3003a640f5b6","Type":"ContainerDied","Data":"078b0d6c957956baffa38ea01d8c4953b78710475a473a7d46b4359ed37b1510"} Feb 18 00:53:46 crc kubenswrapper[4858]: I0218 00:53:46.354802 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" podUID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerName="barbican-keystone-listener-log" containerID="cri-o://df5a1baecb296a9ddf03fb43d34534bf5773dc241d3cde3a2fd6d6646e31c71f" gracePeriod=30 Feb 18 00:53:46 crc kubenswrapper[4858]: I0218 00:53:46.354863 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" podUID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerName="barbican-keystone-listener" containerID="cri-o://fa36d5c24517e6823502a114c70e18f84603b0e86bcb588149fa50d929811f1d" gracePeriod=30 Feb 18 00:53:46 crc kubenswrapper[4858]: I0218 00:53:46.911654 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.004830 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4vw4g"] Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.005071 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" podUID="d6419d4c-77e6-41c9-bcbf-e2cc5043232c" containerName="dnsmasq-dns" containerID="cri-o://0eeb0956f7ba5140aa721be8c77a35d5fa0090b45aa2bac45100e425213d1c32" gracePeriod=10 Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.371720 4858 generic.go:334] "Generic (PLEG): container finished" podID="d6419d4c-77e6-41c9-bcbf-e2cc5043232c" containerID="0eeb0956f7ba5140aa721be8c77a35d5fa0090b45aa2bac45100e425213d1c32" exitCode=0 Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.371816 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" event={"ID":"d6419d4c-77e6-41c9-bcbf-e2cc5043232c","Type":"ContainerDied","Data":"0eeb0956f7ba5140aa721be8c77a35d5fa0090b45aa2bac45100e425213d1c32"} Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.374637 4858 generic.go:334] "Generic (PLEG): container finished" podID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerID="fa36d5c24517e6823502a114c70e18f84603b0e86bcb588149fa50d929811f1d" exitCode=0 Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.374670 4858 generic.go:334] "Generic (PLEG): container finished" podID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerID="df5a1baecb296a9ddf03fb43d34534bf5773dc241d3cde3a2fd6d6646e31c71f" exitCode=143 Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.374686 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" event={"ID":"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c","Type":"ContainerDied","Data":"fa36d5c24517e6823502a114c70e18f84603b0e86bcb588149fa50d929811f1d"} Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.374702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" event={"ID":"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c","Type":"ContainerDied","Data":"df5a1baecb296a9ddf03fb43d34534bf5773dc241d3cde3a2fd6d6646e31c71f"} Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.610474 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-657d9bcf46-vrpmh" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:53:47 crc kubenswrapper[4858]: I0218 00:53:47.983641 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.316838 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f78847c8f-hz7xt"] Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.317120 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f78847c8f-hz7xt" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-api" containerID="cri-o://2cdc75157a3e22acce190039d63927731744843656fae2003afbeb33ed574364" gracePeriod=30 Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.317202 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f78847c8f-hz7xt" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-httpd" containerID="cri-o://e498507e534f9b9712c7af1d4443f81de4bc7b4ffabd92ef3be6b82a7eda3f54" gracePeriod=30 Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.327731 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6f78847c8f-hz7xt" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.173:9696/\": EOF" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.354280 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-79f994c65-x27nl"] Feb 18 00:53:48 crc kubenswrapper[4858]: E0218 00:53:48.354702 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerName="barbican-api-log" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.354718 4858 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerName="barbican-api-log" Feb 18 00:53:48 crc kubenswrapper[4858]: E0218 00:53:48.354743 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerName="barbican-api" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.354749 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerName="barbican-api" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.354918 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerName="barbican-api-log" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.354935 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="596c985d-60ec-43fa-aeb2-73b10d64d750" containerName="barbican-api" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.356061 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.372225 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79f994c65-x27nl"] Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.386235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-public-tls-certs\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.386279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-internal-tls-certs\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.386377 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-ovndb-tls-certs\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.386410 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-combined-ca-bundle\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.386483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-config\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.386648 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxq2b\" (UniqueName: \"kubernetes.io/projected/d1f825a6-aa98-4e73-a29c-4b829bf606d6-kube-api-access-zxq2b\") pod \"neutron-79f994c65-x27nl\" (UID: 
\"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.386740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-httpd-config\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.491342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxq2b\" (UniqueName: \"kubernetes.io/projected/d1f825a6-aa98-4e73-a29c-4b829bf606d6-kube-api-access-zxq2b\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.491869 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-httpd-config\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.492024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-public-tls-certs\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.492069 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-internal-tls-certs\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.492166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-ovndb-tls-certs\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.492214 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-combined-ca-bundle\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.492281 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-config\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.507249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-ovndb-tls-certs\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 
00:53:48.507640 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-public-tls-certs\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.507902 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-httpd-config\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.509374 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-combined-ca-bundle\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.521406 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-config\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.521982 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxq2b\" (UniqueName: \"kubernetes.io/projected/d1f825a6-aa98-4e73-a29c-4b829bf606d6-kube-api-access-zxq2b\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.556564 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f825a6-aa98-4e73-a29c-4b829bf606d6-internal-tls-certs\") pod \"neutron-79f994c65-x27nl\" (UID: \"d1f825a6-aa98-4e73-a29c-4b829bf606d6\") " pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.691099 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.701094 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:48 crc kubenswrapper[4858]: I0218 00:53:48.829970 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.434668 4858 generic.go:334] "Generic (PLEG): container finished" podID="f15c5e19-1645-4791-8981-2216c5be654b" containerID="e498507e534f9b9712c7af1d4443f81de4bc7b4ffabd92ef3be6b82a7eda3f54" exitCode=0 Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.434709 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f78847c8f-hz7xt" event={"ID":"f15c5e19-1645-4791-8981-2216c5be654b","Type":"ContainerDied","Data":"e498507e534f9b9712c7af1d4443f81de4bc7b4ffabd92ef3be6b82a7eda3f54"} Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.625718 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.820685 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-scripts\") pod \"48a7b55c-92f4-41e7-b862-45eadd76013b\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.820789 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-combined-ca-bundle\") pod \"48a7b55c-92f4-41e7-b862-45eadd76013b\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.820848 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkksg\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-kube-api-access-vkksg\") pod \"48a7b55c-92f4-41e7-b862-45eadd76013b\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.820914 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-certs\") pod \"48a7b55c-92f4-41e7-b862-45eadd76013b\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.820957 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-config-data\") pod \"48a7b55c-92f4-41e7-b862-45eadd76013b\" (UID: \"48a7b55c-92f4-41e7-b862-45eadd76013b\") " Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.826377 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-kube-api-access-vkksg" (OuterVolumeSpecName: "kube-api-access-vkksg") pod "48a7b55c-92f4-41e7-b862-45eadd76013b" (UID: "48a7b55c-92f4-41e7-b862-45eadd76013b"). InnerVolumeSpecName "kube-api-access-vkksg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.874109 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-certs" (OuterVolumeSpecName: "certs") pod "48a7b55c-92f4-41e7-b862-45eadd76013b" (UID: "48a7b55c-92f4-41e7-b862-45eadd76013b"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.887671 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-scripts" (OuterVolumeSpecName: "scripts") pod "48a7b55c-92f4-41e7-b862-45eadd76013b" (UID: "48a7b55c-92f4-41e7-b862-45eadd76013b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.900683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48a7b55c-92f4-41e7-b862-45eadd76013b" (UID: "48a7b55c-92f4-41e7-b862-45eadd76013b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.911772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-config-data" (OuterVolumeSpecName: "config-data") pod "48a7b55c-92f4-41e7-b862-45eadd76013b" (UID: "48a7b55c-92f4-41e7-b862-45eadd76013b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.923358 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkksg\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-kube-api-access-vkksg\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.923385 4858 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/48a7b55c-92f4-41e7-b862-45eadd76013b-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.923396 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.923405 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:49 crc kubenswrapper[4858]: I0218 00:53:49.923412 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a7b55c-92f4-41e7-b862-45eadd76013b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.032644 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6f78847c8f-hz7xt" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.173:9696/\": dial tcp 10.217.0.173:9696: connect: connection refused" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.446675 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-bpmww" event={"ID":"48a7b55c-92f4-41e7-b862-45eadd76013b","Type":"ContainerDied","Data":"e5569a8c4230ba8ff5f002bf109c5a55b4488cb55384826672d543c1b5f15117"} Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.446712 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5569a8c4230ba8ff5f002bf109c5a55b4488cb55384826672d543c1b5f15117" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.446750 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-bpmww" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.793968 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-cz4n9"] Feb 18 00:53:50 crc kubenswrapper[4858]: E0218 00:53:50.794524 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a7b55c-92f4-41e7-b862-45eadd76013b" containerName="cloudkitty-db-sync" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.794537 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a7b55c-92f4-41e7-b862-45eadd76013b" containerName="cloudkitty-db-sync" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.794760 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a7b55c-92f4-41e7-b862-45eadd76013b" containerName="cloudkitty-db-sync" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.795409 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.800848 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.801049 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-cz4n9"] Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.801098 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.801356 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.802081 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-jbnsw" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.802228 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.828673 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.829307 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.856630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mngv7\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-kube-api-access-mngv7\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.856695 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-config-data\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.856749 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-scripts\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.856779 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-combined-ca-bundle\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.856947 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-certs\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-scripts\") pod \"f69b36cb-f694-4e90-b673-47681459414b\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960383 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-config\") pod \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960453 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-config-data\") pod \"f69b36cb-f694-4e90-b673-47681459414b\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-db-sync-config-data\") pod \"f69b36cb-f694-4e90-b673-47681459414b\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960553 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-combined-ca-bundle\") pod \"f69b36cb-f694-4e90-b673-47681459414b\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-svc\") pod \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960689 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f69b36cb-f694-4e90-b673-47681459414b-etc-machine-id\") pod \"f69b36cb-f694-4e90-b673-47681459414b\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960741 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv8mb\" (UniqueName: \"kubernetes.io/projected/f69b36cb-f694-4e90-b673-47681459414b-kube-api-access-dv8mb\") pod \"f69b36cb-f694-4e90-b673-47681459414b\" (UID: \"f69b36cb-f694-4e90-b673-47681459414b\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960790 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-sb\") pod \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960825 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-nb\") pod \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960865 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-swift-storage-0\") pod \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.960918 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzx9p\" (UniqueName: \"kubernetes.io/projected/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-kube-api-access-dzx9p\") pod \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\" (UID: \"d6419d4c-77e6-41c9-bcbf-e2cc5043232c\") " Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.961162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-scripts\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.961229 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-combined-ca-bundle\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " 
pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.961273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-certs\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.961336 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f69b36cb-f694-4e90-b673-47681459414b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f69b36cb-f694-4e90-b673-47681459414b" (UID: "f69b36cb-f694-4e90-b673-47681459414b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.961422 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mngv7\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-kube-api-access-mngv7\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.961485 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-config-data\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.961603 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f69b36cb-f694-4e90-b673-47681459414b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.968222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-combined-ca-bundle\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.968786 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-scripts" (OuterVolumeSpecName: "scripts") pod "f69b36cb-f694-4e90-b673-47681459414b" (UID: "f69b36cb-f694-4e90-b673-47681459414b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.974023 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69b36cb-f694-4e90-b673-47681459414b-kube-api-access-dv8mb" (OuterVolumeSpecName: "kube-api-access-dv8mb") pod "f69b36cb-f694-4e90-b673-47681459414b" (UID: "f69b36cb-f694-4e90-b673-47681459414b"). InnerVolumeSpecName "kube-api-access-dv8mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.985346 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-config-data\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:50 crc kubenswrapper[4858]: I0218 00:53:50.987975 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mngv7\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-kube-api-access-mngv7\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.000835 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-kube-api-access-dzx9p" (OuterVolumeSpecName: "kube-api-access-dzx9p") pod "d6419d4c-77e6-41c9-bcbf-e2cc5043232c" (UID: "d6419d4c-77e6-41c9-bcbf-e2cc5043232c"). InnerVolumeSpecName "kube-api-access-dzx9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.003140 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-certs\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.003626 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f69b36cb-f694-4e90-b673-47681459414b" (UID: "f69b36cb-f694-4e90-b673-47681459414b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.026870 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-scripts\") pod \"cloudkitty-storageinit-cz4n9\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.063222 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.063247 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.063257 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv8mb\" (UniqueName: \"kubernetes.io/projected/f69b36cb-f694-4e90-b673-47681459414b-kube-api-access-dv8mb\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.063267 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzx9p\" (UniqueName: \"kubernetes.io/projected/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-kube-api-access-dzx9p\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.098850 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6419d4c-77e6-41c9-bcbf-e2cc5043232c" (UID: "d6419d4c-77e6-41c9-bcbf-e2cc5043232c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.132660 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f69b36cb-f694-4e90-b673-47681459414b" (UID: "f69b36cb-f694-4e90-b673-47681459414b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.133294 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6419d4c-77e6-41c9-bcbf-e2cc5043232c" (UID: "d6419d4c-77e6-41c9-bcbf-e2cc5043232c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.154816 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.166696 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.166805 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.166865 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.176558 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-config" (OuterVolumeSpecName: "config") pod "d6419d4c-77e6-41c9-bcbf-e2cc5043232c" (UID: "d6419d4c-77e6-41c9-bcbf-e2cc5043232c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.179148 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6419d4c-77e6-41c9-bcbf-e2cc5043232c" (UID: "d6419d4c-77e6-41c9-bcbf-e2cc5043232c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.183506 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-config-data" (OuterVolumeSpecName: "config-data") pod "f69b36cb-f694-4e90-b673-47681459414b" (UID: "f69b36cb-f694-4e90-b673-47681459414b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.208110 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6419d4c-77e6-41c9-bcbf-e2cc5043232c" (UID: "d6419d4c-77e6-41c9-bcbf-e2cc5043232c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.269107 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.269136 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.269147 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6419d4c-77e6-41c9-bcbf-e2cc5043232c-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.269158 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69b36cb-f694-4e90-b673-47681459414b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.592416 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" event={"ID":"d6419d4c-77e6-41c9-bcbf-e2cc5043232c","Type":"ContainerDied","Data":"59e61a4ea9605a31d2efe55e806331cc260cfd3db87f1b4166cb1419a45f5881"} Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.593736 4858 scope.go:117] "RemoveContainer" containerID="0eeb0956f7ba5140aa721be8c77a35d5fa0090b45aa2bac45100e425213d1c32" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.593975 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4vw4g" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.612969 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrqgb" event={"ID":"f69b36cb-f694-4e90-b673-47681459414b","Type":"ContainerDied","Data":"c3fe06be448b77fdb06f527c95e8f51312415c9e4eed381ce921e9ea0d5b28b4"} Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.613243 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3fe06be448b77fdb06f527c95e8f51312415c9e4eed381ce921e9ea0d5b28b4" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.613438 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wrqgb" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.644623 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4vw4g"] Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.655281 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4vw4g"] Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.896616 4858 scope.go:117] "RemoveContainer" containerID="6f80e1a6f7575f9484e5569059acac416fd5cc0fa571fec14f6b233ff423073e" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.970644 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:51 crc kubenswrapper[4858]: I0218 00:53:51.976047 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.036183 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-combined-ca-bundle\") pod \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.036358 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-logs\") pod \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.036445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data\") pod \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.036556 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-logs\") pod \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.036639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data-custom\") pod \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.037095 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data\") pod \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.037234 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-combined-ca-bundle\") pod \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.037364 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcwmc\" (UniqueName: \"kubernetes.io/projected/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-kube-api-access-xcwmc\") pod \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\" (UID: \"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.037442 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data-custom\") pod \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.037660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wh8j\" (UniqueName: \"kubernetes.io/projected/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-kube-api-access-9wh8j\") pod 
\"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\" (UID: \"6268a0a7-fb2d-437c-9a9a-3003a640f5b6\") " Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.047198 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" (UID: "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.048586 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-logs" (OuterVolumeSpecName: "logs") pod "6268a0a7-fb2d-437c-9a9a-3003a640f5b6" (UID: "6268a0a7-fb2d-437c-9a9a-3003a640f5b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.050734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-logs" (OuterVolumeSpecName: "logs") pod "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" (UID: "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.052759 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-kube-api-access-9wh8j" (OuterVolumeSpecName: "kube-api-access-9wh8j") pod "6268a0a7-fb2d-437c-9a9a-3003a640f5b6" (UID: "6268a0a7-fb2d-437c-9a9a-3003a640f5b6"). InnerVolumeSpecName "kube-api-access-9wh8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.061725 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-kube-api-access-xcwmc" (OuterVolumeSpecName: "kube-api-access-xcwmc") pod "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" (UID: "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c"). InnerVolumeSpecName "kube-api-access-xcwmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.109043 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6268a0a7-fb2d-437c-9a9a-3003a640f5b6" (UID: "6268a0a7-fb2d-437c-9a9a-3003a640f5b6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.154891 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcwmc\" (UniqueName: \"kubernetes.io/projected/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-kube-api-access-xcwmc\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.154917 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.154927 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wh8j\" (UniqueName: \"kubernetes.io/projected/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-kube-api-access-9wh8j\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.154935 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.154943 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.154951 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.187641 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" (UID: "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.191162 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:53:52 crc kubenswrapper[4858]: E0218 00:53:52.191717 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6419d4c-77e6-41c9-bcbf-e2cc5043232c" containerName="init" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.191732 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6419d4c-77e6-41c9-bcbf-e2cc5043232c" containerName="init" Feb 18 00:53:52 crc kubenswrapper[4858]: E0218 00:53:52.191750 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerName="barbican-worker" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.191755 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerName="barbican-worker" Feb 18 00:53:52 crc kubenswrapper[4858]: E0218 00:53:52.191773 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerName="barbican-keystone-listener" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.191778 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerName="barbican-keystone-listener" Feb 18 00:53:52 crc kubenswrapper[4858]: E0218 00:53:52.191792 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69b36cb-f694-4e90-b673-47681459414b" containerName="cinder-db-sync" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.191797 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69b36cb-f694-4e90-b673-47681459414b" containerName="cinder-db-sync" Feb 18 00:53:52 crc kubenswrapper[4858]: E0218 00:53:52.191807 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6419d4c-77e6-41c9-bcbf-e2cc5043232c" containerName="dnsmasq-dns" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.191812 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6419d4c-77e6-41c9-bcbf-e2cc5043232c" containerName="dnsmasq-dns" Feb 18 00:53:52 crc kubenswrapper[4858]: E0218 00:53:52.191825 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerName="barbican-keystone-listener-log" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.191830 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerName="barbican-keystone-listener-log" Feb 18 00:53:52 crc kubenswrapper[4858]: E0218 00:53:52.191841 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerName="barbican-worker-log" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.191847 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerName="barbican-worker-log" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.192018 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerName="barbican-worker" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.192030 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerName="barbican-keystone-listener-log" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.192043 4858 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f69b36cb-f694-4e90-b673-47681459414b" containerName="cinder-db-sync" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.192054 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6419d4c-77e6-41c9-bcbf-e2cc5043232c" containerName="dnsmasq-dns" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.192064 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" containerName="barbican-worker-log" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.192078 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" containerName="barbican-keystone-listener" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.194372 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.199097 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.199226 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nt9wm" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.199369 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.203470 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.210766 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.256840 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.257132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.257239 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-scripts\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.257316 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fceda99c-5b24-470d-a686-fea2bb92d258-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.257407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.257493 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvzbc\" (UniqueName: \"kubernetes.io/projected/fceda99c-5b24-470d-a686-fea2bb92d258-kube-api-access-fvzbc\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.257664 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.279398 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data" (OuterVolumeSpecName: "config-data") pod "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" (UID: "0450ba6b-8a46-4e32-aebb-2021a7d0ff8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.297136 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kp8tk"] Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.304931 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.305157 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6268a0a7-fb2d-437c-9a9a-3003a640f5b6" (UID: "6268a0a7-fb2d-437c-9a9a-3003a640f5b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.309638 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kp8tk"] Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.359906 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.359972 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckmng\" (UniqueName: \"kubernetes.io/projected/4a241bab-d126-4228-8265-fba10001ce81-kube-api-access-ckmng\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.359998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360039 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360065 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360145 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-scripts\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/fceda99c-5b24-470d-a686-fea2bb92d258-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360224 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-config\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360240 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvzbc\" (UniqueName: \"kubernetes.io/projected/fceda99c-5b24-470d-a686-fea2bb92d258-kube-api-access-fvzbc\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360327 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360342 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.360849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fceda99c-5b24-470d-a686-fea2bb92d258-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.363071 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-scripts\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.372582 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.373456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.374240 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.375559 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.376784 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.378145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.392291 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.417019 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvzbc\" (UniqueName: \"kubernetes.io/projected/fceda99c-5b24-470d-a686-fea2bb92d258-kube-api-access-fvzbc\") pod \"cinder-scheduler-0\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.417309 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data" (OuterVolumeSpecName: "config-data") pod "6268a0a7-fb2d-437c-9a9a-3003a640f5b6" (UID: "6268a0a7-fb2d-437c-9a9a-3003a640f5b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.461039 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-cz4n9"] Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.461705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-scripts\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.461822 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.461919 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-config\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.461991 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462183 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462262 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckmng\" (UniqueName: \"kubernetes.io/projected/4a241bab-d126-4228-8265-fba10001ce81-kube-api-access-ckmng\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462329 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462412 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48tgr\" (UniqueName: \"kubernetes.io/projected/1e6d2753-32f3-48a3-9908-a82350f136dc-kube-api-access-48tgr\") pod 
\"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462602 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6d2753-32f3-48a3-9908-a82350f136dc-logs\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6d2753-32f3-48a3-9908-a82350f136dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.462862 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268a0a7-fb2d-437c-9a9a-3003a640f5b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.463650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.464275 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-config\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.464447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.465092 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.465543 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.494973 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckmng\" (UniqueName: \"kubernetes.io/projected/4a241bab-d126-4228-8265-fba10001ce81-kube-api-access-ckmng\") pod \"dnsmasq-dns-5c9776ccc5-kp8tk\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.529012 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.565158 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.565487 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.565565 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48tgr\" (UniqueName: \"kubernetes.io/projected/1e6d2753-32f3-48a3-9908-a82350f136dc-kube-api-access-48tgr\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.565584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.565610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6d2753-32f3-48a3-9908-a82350f136dc-logs\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.565650 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6d2753-32f3-48a3-9908-a82350f136dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.565681 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-scripts\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.576927 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6d2753-32f3-48a3-9908-a82350f136dc-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.577204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6d2753-32f3-48a3-9908-a82350f136dc-logs\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.580911 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-scripts\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.582865 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.585219 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.586859 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.600067 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48tgr\" (UniqueName: \"kubernetes.io/projected/1e6d2753-32f3-48a3-9908-a82350f136dc-kube-api-access-48tgr\") pod \"cinder-api-0\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.638779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" event={"ID":"0450ba6b-8a46-4e32-aebb-2021a7d0ff8c","Type":"ContainerDied","Data":"b978d50cde38cad44007fcf7899a85be300ecec1c5a80e6a76451602de92b163"} Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.638830 4858 scope.go:117] "RemoveContainer" containerID="fa36d5c24517e6823502a114c70e18f84603b0e86bcb588149fa50d929811f1d" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.638942 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-684c84b858-zh69p" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.640444 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.664557 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerStarted","Data":"5d441e0136063f116e112dcecb10719e40537d0165766b2a666a68eba2d7814c"} Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.664712 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="ceilometer-central-agent" containerID="cri-o://ffebf633b81c91d1f3d8ee0291a010200e29d37b5d8bf7f17dbc885d34112dc3" gracePeriod=30 Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.664938 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.665159 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="proxy-httpd" containerID="cri-o://5d441e0136063f116e112dcecb10719e40537d0165766b2a666a68eba2d7814c" gracePeriod=30 Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.665208 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="sg-core" containerID="cri-o://5e97e81cbdb2397d1acc161131ae6b1fc3124d6193fbfe8b4de2d7641ce2d09d" gracePeriod=30 Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.665242 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="ceilometer-notification-agent" containerID="cri-o://32988669988c871a59910744c626cb00c849036db3f2e8d590654c6856162836" gracePeriod=30 Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.671238 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cz4n9" event={"ID":"cbda3331-08bc-49a1-8cf2-f24700bf4a89","Type":"ContainerStarted","Data":"47ec27fd74942ae39b8d573cc9d87e283ce1e223a5edd9d29e36dbdf5ed8af97"} Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.678041 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79f994c65-x27nl"] Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.717059 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.719345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d98494dc7-ncwkj" event={"ID":"6268a0a7-fb2d-437c-9a9a-3003a640f5b6","Type":"ContainerDied","Data":"29a75fe9412d48cb1767ff98939a8834f088f21db33f8271c11aeb4380d48b82"} Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.719455 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5d98494dc7-ncwkj" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.730677 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.550347803 podStartE2EDuration="1m7.730620677s" podCreationTimestamp="2026-02-18 00:52:45 +0000 UTC" firstStartedPulling="2026-02-18 00:52:46.713883669 +0000 UTC m=+1120.019720401" lastFinishedPulling="2026-02-18 00:53:51.894156533 +0000 UTC m=+1185.199993275" observedRunningTime="2026-02-18 00:53:52.718189386 +0000 UTC m=+1186.024026118" watchObservedRunningTime="2026-02-18 00:53:52.730620677 +0000 UTC m=+1186.036457429" Feb 18 00:53:52 crc kubenswrapper[4858]: I0218 00:53:52.987120 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-684c84b858-zh69p"] Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.011591 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-684c84b858-zh69p"] Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.013277 4858 scope.go:117] "RemoveContainer" containerID="df5a1baecb296a9ddf03fb43d34534bf5773dc241d3cde3a2fd6d6646e31c71f" Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.026862 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5d98494dc7-ncwkj"] Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.093872 4858 scope.go:117] "RemoveContainer" containerID="0fc8bfe7568342dac651c6944925ace9b5f8d06aec784d3e13411d6f5e6b1863" Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.105132 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5d98494dc7-ncwkj"] Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.182776 4858 scope.go:117] "RemoveContainer" containerID="078b0d6c957956baffa38ea01d8c4953b78710475a473a7d46b4359ed37b1510" Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.216764 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:53:53 crc kubenswrapper[4858]: W0218 00:53:53.239747 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfceda99c_5b24_470d_a686_fea2bb92d258.slice/crio-75f03d39772eeb98499ecd24aaf1c34d357bbc8b086fda7adbadd9596c3c6639 WatchSource:0}: Error finding container 75f03d39772eeb98499ecd24aaf1c34d357bbc8b086fda7adbadd9596c3c6639: Status 404 returned error can't find the container with id 75f03d39772eeb98499ecd24aaf1c34d357bbc8b086fda7adbadd9596c3c6639 Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.249601 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.408012 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kp8tk"] Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.491694 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0450ba6b-8a46-4e32-aebb-2021a7d0ff8c" path="/var/lib/kubelet/pods/0450ba6b-8a46-4e32-aebb-2021a7d0ff8c/volumes" Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.492316 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6268a0a7-fb2d-437c-9a9a-3003a640f5b6" path="/var/lib/kubelet/pods/6268a0a7-fb2d-437c-9a9a-3003a640f5b6/volumes" Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.492887 4858 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="d6419d4c-77e6-41c9-bcbf-e2cc5043232c" path="/var/lib/kubelet/pods/d6419d4c-77e6-41c9-bcbf-e2cc5043232c/volumes" Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.550536 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.803837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f994c65-x27nl" event={"ID":"d1f825a6-aa98-4e73-a29c-4b829bf606d6","Type":"ContainerStarted","Data":"b79c40f4529e6e71d7c2e8dec4d3811c75adea655954de877d8449ab71654904"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.804088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f994c65-x27nl" event={"ID":"d1f825a6-aa98-4e73-a29c-4b829bf606d6","Type":"ContainerStarted","Data":"69b8c0d5d24b76979ad17c7ff862215ceac978387e5c83060775e8b14d0dec08"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.804101 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f994c65-x27nl" event={"ID":"d1f825a6-aa98-4e73-a29c-4b829bf606d6","Type":"ContainerStarted","Data":"c4a73eb2084fa1cf5d08f9c7dbaa6dd6adb4b4ae3a56735d3c1fe3b39efc6d6d"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.805243 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.817210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" event={"ID":"4a241bab-d126-4228-8265-fba10001ce81","Type":"ContainerStarted","Data":"9e9a69d422f99c1126c2e90aab3b66a34d1580350cdc75991af378a8a307a7c9"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.842108 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-79f994c65-x27nl" podStartSLOduration=5.842093604 podStartE2EDuration="5.842093604s" podCreationTimestamp="2026-02-18 00:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:53.840036914 +0000 UTC m=+1187.145873646" watchObservedRunningTime="2026-02-18 00:53:53.842093604 +0000 UTC m=+1187.147930336" Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.842641 4858 generic.go:334] "Generic (PLEG): container finished" podID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerID="5d441e0136063f116e112dcecb10719e40537d0165766b2a666a68eba2d7814c" exitCode=0 Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.842668 4858 generic.go:334] "Generic (PLEG): container finished" podID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerID="5e97e81cbdb2397d1acc161131ae6b1fc3124d6193fbfe8b4de2d7641ce2d09d" exitCode=2 Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.842676 4858 generic.go:334] "Generic (PLEG): container finished" podID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerID="ffebf633b81c91d1f3d8ee0291a010200e29d37b5d8bf7f17dbc885d34112dc3" exitCode=0 Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.842713 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerDied","Data":"5d441e0136063f116e112dcecb10719e40537d0165766b2a666a68eba2d7814c"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.842739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerDied","Data":"5e97e81cbdb2397d1acc161131ae6b1fc3124d6193fbfe8b4de2d7641ce2d09d"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.842749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerDied","Data":"ffebf633b81c91d1f3d8ee0291a010200e29d37b5d8bf7f17dbc885d34112dc3"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.844144 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fceda99c-5b24-470d-a686-fea2bb92d258","Type":"ContainerStarted","Data":"75f03d39772eeb98499ecd24aaf1c34d357bbc8b086fda7adbadd9596c3c6639"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.869848 4858 generic.go:334] "Generic (PLEG): container finished" podID="f15c5e19-1645-4791-8981-2216c5be654b" containerID="2cdc75157a3e22acce190039d63927731744843656fae2003afbeb33ed574364" exitCode=0 Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.869918 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f78847c8f-hz7xt" event={"ID":"f15c5e19-1645-4791-8981-2216c5be654b","Type":"ContainerDied","Data":"2cdc75157a3e22acce190039d63927731744843656fae2003afbeb33ed574364"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.880757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cz4n9" event={"ID":"cbda3331-08bc-49a1-8cf2-f24700bf4a89","Type":"ContainerStarted","Data":"91a75a5f520b435c29457529ec0ca3a2704faf832666424082ed19ae90bc5a4b"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.925243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6d2753-32f3-48a3-9908-a82350f136dc","Type":"ContainerStarted","Data":"4a8d715c74e37b4a39486bc4a6034f5fcb0acd1075be4b0db9e2f97419e3ce94"} Feb 18 00:53:53 crc kubenswrapper[4858]: I0218 00:53:53.927518 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-cz4n9" podStartSLOduration=3.927483333 podStartE2EDuration="3.927483333s" podCreationTimestamp="2026-02-18 00:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:53.906527875 +0000 UTC m=+1187.212364607" watchObservedRunningTime="2026-02-18 00:53:53.927483333 +0000 UTC m=+1187.233320055" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.284197 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.418782 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-config\") pod \"f15c5e19-1645-4791-8981-2216c5be654b\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.418858 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfgvb\" (UniqueName: \"kubernetes.io/projected/f15c5e19-1645-4791-8981-2216c5be654b-kube-api-access-kfgvb\") pod \"f15c5e19-1645-4791-8981-2216c5be654b\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.418923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-public-tls-certs\") pod \"f15c5e19-1645-4791-8981-2216c5be654b\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.418972 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-httpd-config\") pod \"f15c5e19-1645-4791-8981-2216c5be654b\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.419000 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-combined-ca-bundle\") pod \"f15c5e19-1645-4791-8981-2216c5be654b\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.419040 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-internal-tls-certs\") pod \"f15c5e19-1645-4791-8981-2216c5be654b\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.419061 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-ovndb-tls-certs\") pod \"f15c5e19-1645-4791-8981-2216c5be654b\" (UID: \"f15c5e19-1645-4791-8981-2216c5be654b\") " Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.426749 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.428334 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f15c5e19-1645-4791-8981-2216c5be654b-kube-api-access-kfgvb" (OuterVolumeSpecName: "kube-api-access-kfgvb") pod "f15c5e19-1645-4791-8981-2216c5be654b" (UID: "f15c5e19-1645-4791-8981-2216c5be654b"). InnerVolumeSpecName "kube-api-access-kfgvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.428423 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f15c5e19-1645-4791-8981-2216c5be654b" (UID: "f15c5e19-1645-4791-8981-2216c5be654b"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.513648 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f15c5e19-1645-4791-8981-2216c5be654b" (UID: "f15c5e19-1645-4791-8981-2216c5be654b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.522157 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f15c5e19-1645-4791-8981-2216c5be654b" (UID: "f15c5e19-1645-4791-8981-2216c5be654b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.523719 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfgvb\" (UniqueName: \"kubernetes.io/projected/f15c5e19-1645-4791-8981-2216c5be654b-kube-api-access-kfgvb\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.523748 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.523761 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.523773 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.542340 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-config" (OuterVolumeSpecName: "config") pod "f15c5e19-1645-4791-8981-2216c5be654b" (UID: "f15c5e19-1645-4791-8981-2216c5be654b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.591619 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f15c5e19-1645-4791-8981-2216c5be654b" (UID: "f15c5e19-1645-4791-8981-2216c5be654b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.591639 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f15c5e19-1645-4791-8981-2216c5be654b" (UID: "f15c5e19-1645-4791-8981-2216c5be654b"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.625226 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.625479 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.625490 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f15c5e19-1645-4791-8981-2216c5be654b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.799134 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.855013 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-674dbc688d-knngw" Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.934147 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-657d9bcf46-vrpmh"] Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.934352 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-657d9bcf46-vrpmh" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api-log" containerID="cri-o://1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528" gracePeriod=30 Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.934596 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-657d9bcf46-vrpmh" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api" containerID="cri-o://4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301" gracePeriod=30 Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.981312 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6d2753-32f3-48a3-9908-a82350f136dc","Type":"ContainerStarted","Data":"3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce"} Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.990224 4858 generic.go:334] "Generic (PLEG): container finished" podID="4a241bab-d126-4228-8265-fba10001ce81" containerID="53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91" exitCode=0 Feb 18 00:53:54 crc kubenswrapper[4858]: I0218 00:53:54.990287 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" event={"ID":"4a241bab-d126-4228-8265-fba10001ce81","Type":"ContainerDied","Data":"53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91"} Feb 18 00:53:55 crc kubenswrapper[4858]: I0218 00:53:55.001647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f78847c8f-hz7xt" event={"ID":"f15c5e19-1645-4791-8981-2216c5be654b","Type":"ContainerDied","Data":"cef7714b2664ce0abdce5e860eb77b1e5e1d954e5268f995a053027d4bd06ba8"} Feb 18 00:53:55 crc kubenswrapper[4858]: I0218 00:53:55.001730 4858 scope.go:117] "RemoveContainer" containerID="e498507e534f9b9712c7af1d4443f81de4bc7b4ffabd92ef3be6b82a7eda3f54" Feb 18 00:53:55 crc kubenswrapper[4858]: I0218 00:53:55.001959 4858 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/neutron-6f78847c8f-hz7xt" Feb 18 00:53:55 crc kubenswrapper[4858]: I0218 00:53:55.124843 4858 scope.go:117] "RemoveContainer" containerID="2cdc75157a3e22acce190039d63927731744843656fae2003afbeb33ed574364" Feb 18 00:53:55 crc kubenswrapper[4858]: I0218 00:53:55.146272 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f78847c8f-hz7xt"] Feb 18 00:53:55 crc kubenswrapper[4858]: I0218 00:53:55.161295 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f78847c8f-hz7xt"] Feb 18 00:53:55 crc kubenswrapper[4858]: I0218 00:53:55.433210 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f15c5e19-1645-4791-8981-2216c5be654b" path="/var/lib/kubelet/pods/f15c5e19-1645-4791-8981-2216c5be654b/volumes" Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.038201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6d2753-32f3-48a3-9908-a82350f136dc","Type":"ContainerStarted","Data":"377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07"} Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.039625 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerName="cinder-api-log" containerID="cri-o://3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce" gracePeriod=30 Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.039859 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerName="cinder-api" containerID="cri-o://377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07" gracePeriod=30 Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.040026 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.057599 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" event={"ID":"4a241bab-d126-4228-8265-fba10001ce81","Type":"ContainerStarted","Data":"2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd"} Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.065333 4858 generic.go:334] "Generic (PLEG): container finished" podID="283581c7-0894-47b6-b933-f36c54f50e4b" containerID="1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528" exitCode=143 Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.065393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657d9bcf46-vrpmh" event={"ID":"283581c7-0894-47b6-b933-f36c54f50e4b","Type":"ContainerDied","Data":"1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528"} Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.067061 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.067029716 podStartE2EDuration="4.067029716s" podCreationTimestamp="2026-02-18 00:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:56.060566329 +0000 UTC m=+1189.366403061" watchObservedRunningTime="2026-02-18 00:53:56.067029716 +0000 UTC m=+1189.372866448" Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.076807 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"fceda99c-5b24-470d-a686-fea2bb92d258","Type":"ContainerStarted","Data":"1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5"} Feb 18 00:53:56 crc kubenswrapper[4858]: I0218 00:53:56.091701 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" podStartSLOduration=4.091681594 podStartE2EDuration="4.091681594s" podCreationTimestamp="2026-02-18 00:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:53:56.090369142 +0000 UTC m=+1189.396205874" watchObservedRunningTime="2026-02-18 00:53:56.091681594 +0000 UTC m=+1189.397518326" Feb 18 00:53:57 crc kubenswrapper[4858]: I0218 00:53:57.089622 4858 generic.go:334] "Generic (PLEG): container finished" podID="cbda3331-08bc-49a1-8cf2-f24700bf4a89" containerID="91a75a5f520b435c29457529ec0ca3a2704faf832666424082ed19ae90bc5a4b" exitCode=0 Feb 18 00:53:57 crc kubenswrapper[4858]: I0218 00:53:57.089722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cz4n9" event={"ID":"cbda3331-08bc-49a1-8cf2-f24700bf4a89","Type":"ContainerDied","Data":"91a75a5f520b435c29457529ec0ca3a2704faf832666424082ed19ae90bc5a4b"} Feb 18 00:53:57 crc kubenswrapper[4858]: I0218 00:53:57.092238 4858 generic.go:334] "Generic (PLEG): container finished" podID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerID="3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce" exitCode=143 Feb 18 00:53:57 crc kubenswrapper[4858]: I0218 00:53:57.092351 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6d2753-32f3-48a3-9908-a82350f136dc","Type":"ContainerDied","Data":"3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce"} Feb 18 00:53:57 crc kubenswrapper[4858]: I0218 00:53:57.095535 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fceda99c-5b24-470d-a686-fea2bb92d258","Type":"ContainerStarted","Data":"3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84"} Feb 18 00:53:57 crc kubenswrapper[4858]: I0218 00:53:57.095584 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:53:57 crc kubenswrapper[4858]: I0218 00:53:57.147376 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.170810642 podStartE2EDuration="5.147352979s" podCreationTimestamp="2026-02-18 00:53:52 +0000 UTC" firstStartedPulling="2026-02-18 00:53:53.249367165 +0000 UTC m=+1186.555203897" lastFinishedPulling="2026-02-18 00:53:54.225909502 +0000 UTC m=+1187.531746234" observedRunningTime="2026-02-18 00:53:57.134806635 +0000 UTC m=+1190.440643377" watchObservedRunningTime="2026-02-18 00:53:57.147352979 +0000 UTC m=+1190.453189721" Feb 18 00:53:57 crc kubenswrapper[4858]: I0218 00:53:57.529692 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.109000 4858 generic.go:334] "Generic (PLEG): container finished" podID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerID="32988669988c871a59910744c626cb00c849036db3f2e8d590654c6856162836" exitCode=0 Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.109433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerDied","Data":"32988669988c871a59910744c626cb00c849036db3f2e8d590654c6856162836"} Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.390143 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.427459 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-config-data\") pod \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.427513 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-sg-core-conf-yaml\") pod \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.427554 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh4kz\" (UniqueName: \"kubernetes.io/projected/4770fea7-6a1a-44e8-bebe-09220dbc4c71-kube-api-access-mh4kz\") pod \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.427584 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-combined-ca-bundle\") pod \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.427624 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-run-httpd\") pod \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.427641 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-scripts\") pod \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.427677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-log-httpd\") pod \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\" (UID: \"4770fea7-6a1a-44e8-bebe-09220dbc4c71\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.428427 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4770fea7-6a1a-44e8-bebe-09220dbc4c71" (UID: "4770fea7-6a1a-44e8-bebe-09220dbc4c71"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.428661 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4770fea7-6a1a-44e8-bebe-09220dbc4c71" (UID: "4770fea7-6a1a-44e8-bebe-09220dbc4c71"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.432697 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-657d9bcf46-vrpmh" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:60854->10.217.0.179:9311: read: connection reset by peer" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.432899 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-657d9bcf46-vrpmh" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:60842->10.217.0.179:9311: read: connection reset by peer" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.435919 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-scripts" (OuterVolumeSpecName: "scripts") pod "4770fea7-6a1a-44e8-bebe-09220dbc4c71" (UID: "4770fea7-6a1a-44e8-bebe-09220dbc4c71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.457726 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4770fea7-6a1a-44e8-bebe-09220dbc4c71-kube-api-access-mh4kz" (OuterVolumeSpecName: "kube-api-access-mh4kz") pod "4770fea7-6a1a-44e8-bebe-09220dbc4c71" (UID: "4770fea7-6a1a-44e8-bebe-09220dbc4c71"). InnerVolumeSpecName "kube-api-access-mh4kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.480515 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4770fea7-6a1a-44e8-bebe-09220dbc4c71" (UID: "4770fea7-6a1a-44e8-bebe-09220dbc4c71"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.529952 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.529984 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.529996 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4770fea7-6a1a-44e8-bebe-09220dbc4c71-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.530008 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.530021 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh4kz\" (UniqueName: \"kubernetes.io/projected/4770fea7-6a1a-44e8-bebe-09220dbc4c71-kube-api-access-mh4kz\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.544205 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4770fea7-6a1a-44e8-bebe-09220dbc4c71" (UID: "4770fea7-6a1a-44e8-bebe-09220dbc4c71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.566669 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-config-data" (OuterVolumeSpecName: "config-data") pod "4770fea7-6a1a-44e8-bebe-09220dbc4c71" (UID: "4770fea7-6a1a-44e8-bebe-09220dbc4c71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.605133 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.631318 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-scripts\") pod \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.631518 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-config-data\") pod \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.631600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-combined-ca-bundle\") pod \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.631654 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-certs\") pod \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.631729 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mngv7\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-kube-api-access-mngv7\") pod \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\" (UID: \"cbda3331-08bc-49a1-8cf2-f24700bf4a89\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.632162 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.632176 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4770fea7-6a1a-44e8-bebe-09220dbc4c71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.638063 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-scripts" (OuterVolumeSpecName: "scripts") pod "cbda3331-08bc-49a1-8cf2-f24700bf4a89" (UID: "cbda3331-08bc-49a1-8cf2-f24700bf4a89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.642150 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-certs" (OuterVolumeSpecName: "certs") pod "cbda3331-08bc-49a1-8cf2-f24700bf4a89" (UID: "cbda3331-08bc-49a1-8cf2-f24700bf4a89"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.647750 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-kube-api-access-mngv7" (OuterVolumeSpecName: "kube-api-access-mngv7") pod "cbda3331-08bc-49a1-8cf2-f24700bf4a89" (UID: "cbda3331-08bc-49a1-8cf2-f24700bf4a89"). 
InnerVolumeSpecName "kube-api-access-mngv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.661570 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-config-data" (OuterVolumeSpecName: "config-data") pod "cbda3331-08bc-49a1-8cf2-f24700bf4a89" (UID: "cbda3331-08bc-49a1-8cf2-f24700bf4a89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.663963 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbda3331-08bc-49a1-8cf2-f24700bf4a89" (UID: "cbda3331-08bc-49a1-8cf2-f24700bf4a89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.734222 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.734537 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.734549 4858 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.734558 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mngv7\" (UniqueName: \"kubernetes.io/projected/cbda3331-08bc-49a1-8cf2-f24700bf4a89-kube-api-access-mngv7\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.734567 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbda3331-08bc-49a1-8cf2-f24700bf4a89-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.835869 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.938087 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-combined-ca-bundle\") pod \"283581c7-0894-47b6-b933-f36c54f50e4b\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.938331 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283581c7-0894-47b6-b933-f36c54f50e4b-logs\") pod \"283581c7-0894-47b6-b933-f36c54f50e4b\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.938377 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data-custom\") pod \"283581c7-0894-47b6-b933-f36c54f50e4b\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.938408 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwbbh\" (UniqueName: \"kubernetes.io/projected/283581c7-0894-47b6-b933-f36c54f50e4b-kube-api-access-wwbbh\") pod \"283581c7-0894-47b6-b933-f36c54f50e4b\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.938452 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data\") pod \"283581c7-0894-47b6-b933-f36c54f50e4b\" (UID: \"283581c7-0894-47b6-b933-f36c54f50e4b\") " Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.941759 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/283581c7-0894-47b6-b933-f36c54f50e4b-logs" (OuterVolumeSpecName: "logs") pod "283581c7-0894-47b6-b933-f36c54f50e4b" (UID: "283581c7-0894-47b6-b933-f36c54f50e4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.947020 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283581c7-0894-47b6-b933-f36c54f50e4b-kube-api-access-wwbbh" (OuterVolumeSpecName: "kube-api-access-wwbbh") pod "283581c7-0894-47b6-b933-f36c54f50e4b" (UID: "283581c7-0894-47b6-b933-f36c54f50e4b"). InnerVolumeSpecName "kube-api-access-wwbbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.962733 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "283581c7-0894-47b6-b933-f36c54f50e4b" (UID: "283581c7-0894-47b6-b933-f36c54f50e4b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:58 crc kubenswrapper[4858]: I0218 00:53:58.987730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "283581c7-0894-47b6-b933-f36c54f50e4b" (UID: "283581c7-0894-47b6-b933-f36c54f50e4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.041112 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/283581c7-0894-47b6-b933-f36c54f50e4b-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.041143 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.041156 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwbbh\" (UniqueName: \"kubernetes.io/projected/283581c7-0894-47b6-b933-f36c54f50e4b-kube-api-access-wwbbh\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.041165 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.104820 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data" (OuterVolumeSpecName: "config-data") pod "283581c7-0894-47b6-b933-f36c54f50e4b" (UID: "283581c7-0894-47b6-b933-f36c54f50e4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.123862 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4770fea7-6a1a-44e8-bebe-09220dbc4c71","Type":"ContainerDied","Data":"4003f01bef0b8b7a8162f1ac6ddd466bfac47648b721c664b2e4bdc6b1b0d51f"} Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.123906 4858 scope.go:117] "RemoveContainer" containerID="5d441e0136063f116e112dcecb10719e40537d0165766b2a666a68eba2d7814c" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.124019 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.134892 4858 generic.go:334] "Generic (PLEG): container finished" podID="283581c7-0894-47b6-b933-f36c54f50e4b" containerID="4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301" exitCode=0 Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.135036 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-657d9bcf46-vrpmh" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.135583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657d9bcf46-vrpmh" event={"ID":"283581c7-0894-47b6-b933-f36c54f50e4b","Type":"ContainerDied","Data":"4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301"} Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.135638 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657d9bcf46-vrpmh" event={"ID":"283581c7-0894-47b6-b933-f36c54f50e4b","Type":"ContainerDied","Data":"181d030cc9d1105f15031919a41b3b1910a06865bedd5963842afb47904d9608"} Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.137628 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-cz4n9" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.137865 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cz4n9" event={"ID":"cbda3331-08bc-49a1-8cf2-f24700bf4a89","Type":"ContainerDied","Data":"47ec27fd74942ae39b8d573cc9d87e283ce1e223a5edd9d29e36dbdf5ed8af97"} Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.137965 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47ec27fd74942ae39b8d573cc9d87e283ce1e223a5edd9d29e36dbdf5ed8af97" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.146298 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283581c7-0894-47b6-b933-f36c54f50e4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.202909 4858 scope.go:117] "RemoveContainer" containerID="5e97e81cbdb2397d1acc161131ae6b1fc3124d6193fbfe8b4de2d7641ce2d09d" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.225334 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-657d9bcf46-vrpmh"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.253400 4858 scope.go:117] "RemoveContainer" containerID="32988669988c871a59910744c626cb00c849036db3f2e8d590654c6856162836" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.263590 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-657d9bcf46-vrpmh"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.307792 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.311313 4858 scope.go:117] "RemoveContainer" containerID="ffebf633b81c91d1f3d8ee0291a010200e29d37b5d8bf7f17dbc885d34112dc3" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.316008 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.332553 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.333191 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="proxy-httpd" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333203 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="proxy-httpd" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.333213 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbda3331-08bc-49a1-8cf2-f24700bf4a89" containerName="cloudkitty-storageinit" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333219 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbda3331-08bc-49a1-8cf2-f24700bf4a89" containerName="cloudkitty-storageinit" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.333236 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="ceilometer-notification-agent" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333242 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="ceilometer-notification-agent" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.333253 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="sg-core" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333259 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="sg-core" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.333271 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-api" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333278 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-api" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.333288 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333293 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.333304 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-httpd" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333309 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-httpd" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.333315 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api-log" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333322 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api-log" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.333339 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="ceilometer-central-agent" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333345 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="ceilometer-central-agent" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333583 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="proxy-httpd" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333598 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333608 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-api" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333619 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="sg-core" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333628 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="ceilometer-central-agent" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333642 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" containerName="ceilometer-notification-agent" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333651 4858 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" containerName="barbican-api-log" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333661 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbda3331-08bc-49a1-8cf2-f24700bf4a89" containerName="cloudkitty-storageinit" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.333674 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15c5e19-1645-4791-8981-2216c5be654b" containerName="neutron-httpd" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.335659 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.340057 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.340407 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.341439 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.342705 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.346474 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-jbnsw" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.346631 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.346747 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.346853 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.347046 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.359306 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.359347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.359405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-scripts\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.359424 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-config-data\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.359446 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-run-httpd\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.359514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-log-httpd\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.359534 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjt4f\" (UniqueName: \"kubernetes.io/projected/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-kube-api-access-zjt4f\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.363575 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.375228 4858 scope.go:117] "RemoveContainer" containerID="4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.382560 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.393089 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kp8tk"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.393372 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" podUID="4a241bab-d126-4228-8265-fba10001ce81" containerName="dnsmasq-dns" containerID="cri-o://2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd" gracePeriod=10 Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.405274 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mb9mk"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.408965 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.441750 4858 scope.go:117] "RemoveContainer" containerID="1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.459052 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283581c7-0894-47b6-b933-f36c54f50e4b" path="/var/lib/kubelet/pods/283581c7-0894-47b6-b933-f36c54f50e4b/volumes" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.461130 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.461228 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4770fea7-6a1a-44e8-bebe-09220dbc4c71" path="/var/lib/kubelet/pods/4770fea7-6a1a-44e8-bebe-09220dbc4c71/volumes" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.461296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-log-httpd\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.461320 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjt4f\" (UniqueName: \"kubernetes.io/projected/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-kube-api-access-zjt4f\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.461603 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.465418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-certs\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.465479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.465540 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.465570 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.465653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqphl\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-kube-api-access-cqphl\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.465698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-scripts\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.465723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-config-data\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.465750 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-run-httpd\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.465796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-scripts\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.466683 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-log-httpd\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.468332 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-run-httpd\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.468968 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mb9mk"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.470848 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-scripts\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.471438 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: 
I0218 00:53:59.471830 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.472581 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-config-data\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.493422 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjt4f\" (UniqueName: \"kubernetes.io/projected/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-kube-api-access-zjt4f\") pod \"ceilometer-0\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.527202 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.528712 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.531759 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.537985 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.566755 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-certs\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.566796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-config\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.566813 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.566859 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.566885 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqphl\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-kube-api-access-cqphl\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " 
pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.566902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.566933 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.566961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-scripts\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.566988 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-svc\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.567007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.567057 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5qnw\" (UniqueName: \"kubernetes.io/projected/274393d7-4826-441f-b03e-496f8b30d14f-kube-api-access-k5qnw\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.567078 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.571594 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-certs\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.571854 4858 scope.go:117] "RemoveContainer" containerID="4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.572433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: 
\"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.573863 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301\": container with ID starting with 4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301 not found: ID does not exist" containerID="4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.573922 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301"} err="failed to get container status \"4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301\": rpc error: code = NotFound desc = could not find container \"4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301\": container with ID starting with 4e882c9e97e0b550feb3ab78f0cdf7b99e372d900759076f0c2c8dd00c543301 not found: ID does not exist" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.573946 4858 scope.go:117] "RemoveContainer" containerID="1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528" Feb 18 00:53:59 crc kubenswrapper[4858]: E0218 00:53:59.574219 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528\": container with ID starting with 1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528 not found: ID does not exist" containerID="1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.574277 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528"} err="failed to get container status \"1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528\": rpc error: code = NotFound desc = could not find container \"1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528\": container with ID starting with 1e29ff7118bc4c5d05e46e07bdeddc3ef4d7f1fd2a6e78502f9581f673f7d528 not found: ID does not exist" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.578402 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.590756 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.605760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqphl\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-kube-api-access-cqphl\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.606890 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-scripts\") pod \"cloudkitty-proc-0\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-certs\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-logs\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677777 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5qnw\" (UniqueName: \"kubernetes.io/projected/274393d7-4826-441f-b03e-496f8b30d14f-kube-api-access-k5qnw\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677817 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-scripts\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-config\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677865 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677881 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72mf6\" (UniqueName: 
\"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-kube-api-access-72mf6\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.677950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.678000 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.678055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.678119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-svc\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.678984 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-svc\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.681581 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.682249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.682599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.686335 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.690975 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-config\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.701627 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5qnw\" (UniqueName: \"kubernetes.io/projected/274393d7-4826-441f-b03e-496f8b30d14f-kube-api-access-k5qnw\") pod \"dnsmasq-dns-67bdc55879-mb9mk\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.710639 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.779686 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.779730 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.779787 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72mf6\" (UniqueName: \"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-kube-api-access-72mf6\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.779897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-certs\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.779938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-logs\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.779968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.780014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-scripts\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 
00:53:59.780540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-logs\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.784137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-certs\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.784145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.788044 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-scripts\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.794103 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.794791 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.795849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72mf6\" (UniqueName: \"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-kube-api-access-72mf6\") pod \"cloudkitty-api-0\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " pod="openstack/cloudkitty-api-0" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.859062 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:53:59 crc kubenswrapper[4858]: I0218 00:53:59.875055 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.131332 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.171959 4858 generic.go:334] "Generic (PLEG): container finished" podID="4a241bab-d126-4228-8265-fba10001ce81" containerID="2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd" exitCode=0 Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.172014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" event={"ID":"4a241bab-d126-4228-8265-fba10001ce81","Type":"ContainerDied","Data":"2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd"} Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.172040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" event={"ID":"4a241bab-d126-4228-8265-fba10001ce81","Type":"ContainerDied","Data":"9e9a69d422f99c1126c2e90aab3b66a34d1580350cdc75991af378a8a307a7c9"} Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.172056 4858 scope.go:117] "RemoveContainer" containerID="2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.172152 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kp8tk" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.231804 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.231967 4858 scope.go:117] "RemoveContainer" containerID="53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.265931 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.272719 4858 scope.go:117] "RemoveContainer" containerID="2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd" Feb 18 00:54:00 crc kubenswrapper[4858]: E0218 00:54:00.282324 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd\": container with ID starting with 2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd not found: ID does not exist" containerID="2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.282405 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd"} err="failed to get container status \"2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd\": rpc error: code = NotFound desc = could not find container \"2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd\": container with ID starting with 2f315bcf65f4c1d5155fb5b77f6c37a3b9986fab450e88f0b957015ce5b9c8dd not found: ID does not exist" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.282457 4858 scope.go:117] "RemoveContainer" containerID="53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91" Feb 18 00:54:00 crc kubenswrapper[4858]: W0218 00:54:00.285558 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20c1fed5_9e72_4c0d_8bf3_e664aee2516b.slice/crio-7be519fd096586cc8d01c83014b5ed553acd5ce1907fc2170d966080e3f9daa1 WatchSource:0}: Error 
finding container 7be519fd096586cc8d01c83014b5ed553acd5ce1907fc2170d966080e3f9daa1: Status 404 returned error can't find the container with id 7be519fd096586cc8d01c83014b5ed553acd5ce1907fc2170d966080e3f9daa1 Feb 18 00:54:00 crc kubenswrapper[4858]: E0218 00:54:00.286999 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91\": container with ID starting with 53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91 not found: ID does not exist" containerID="53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.287023 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91"} err="failed to get container status \"53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91\": rpc error: code = NotFound desc = could not find container \"53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91\": container with ID starting with 53d90ef497488b6955e45ee595828485ea702336efdcd93e5079610927246f91 not found: ID does not exist" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.294232 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-swift-storage-0\") pod \"4a241bab-d126-4228-8265-fba10001ce81\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.294299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-config\") pod \"4a241bab-d126-4228-8265-fba10001ce81\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.294337 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-nb\") pod \"4a241bab-d126-4228-8265-fba10001ce81\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.294380 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckmng\" (UniqueName: \"kubernetes.io/projected/4a241bab-d126-4228-8265-fba10001ce81-kube-api-access-ckmng\") pod \"4a241bab-d126-4228-8265-fba10001ce81\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.294398 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-sb\") pod \"4a241bab-d126-4228-8265-fba10001ce81\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.294440 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-svc\") pod \"4a241bab-d126-4228-8265-fba10001ce81\" (UID: \"4a241bab-d126-4228-8265-fba10001ce81\") " Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.356862 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4a241bab-d126-4228-8265-fba10001ce81-kube-api-access-ckmng" (OuterVolumeSpecName: "kube-api-access-ckmng") pod "4a241bab-d126-4228-8265-fba10001ce81" (UID: "4a241bab-d126-4228-8265-fba10001ce81"). InnerVolumeSpecName "kube-api-access-ckmng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.404241 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckmng\" (UniqueName: \"kubernetes.io/projected/4a241bab-d126-4228-8265-fba10001ce81-kube-api-access-ckmng\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.410144 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a241bab-d126-4228-8265-fba10001ce81" (UID: "4a241bab-d126-4228-8265-fba10001ce81"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.414805 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a241bab-d126-4228-8265-fba10001ce81" (UID: "4a241bab-d126-4228-8265-fba10001ce81"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.417869 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-config" (OuterVolumeSpecName: "config") pod "4a241bab-d126-4228-8265-fba10001ce81" (UID: "4a241bab-d126-4228-8265-fba10001ce81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.429013 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a241bab-d126-4228-8265-fba10001ce81" (UID: "4a241bab-d126-4228-8265-fba10001ce81"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.437012 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a241bab-d126-4228-8265-fba10001ce81" (UID: "4a241bab-d126-4228-8265-fba10001ce81"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.512594 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.512622 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.512633 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.512642 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.512651 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a241bab-d126-4228-8265-fba10001ce81-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.517408 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kp8tk"] Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.528167 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kp8tk"] Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.579600 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 18 00:54:00 crc kubenswrapper[4858]: W0218 00:54:00.587508 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1de0a7c1_4b7f_4e00_a64a_bc9582ecb098.slice/crio-c608b4197f2af7e56c14f9f9c8a52aa24f48e8189c50838ea61a32ed1e13b4ba WatchSource:0}: Error finding container c608b4197f2af7e56c14f9f9c8a52aa24f48e8189c50838ea61a32ed1e13b4ba: Status 404 returned error can't find the container with id c608b4197f2af7e56c14f9f9c8a52aa24f48e8189c50838ea61a32ed1e13b4ba Feb 18 00:54:00 crc kubenswrapper[4858]: I0218 00:54:00.657991 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mb9mk"] Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.205208 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098","Type":"ContainerStarted","Data":"602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c"} Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.205706 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098","Type":"ContainerStarted","Data":"6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add"} Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.205718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098","Type":"ContainerStarted","Data":"c608b4197f2af7e56c14f9f9c8a52aa24f48e8189c50838ea61a32ed1e13b4ba"} Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.206852 4858 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.217118 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e","Type":"ContainerStarted","Data":"3d205a2f5802366d09be80f4b32d5aa0033d38e8577d85d14c4f6982c2e71034"} Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.251435 4858 generic.go:334] "Generic (PLEG): container finished" podID="274393d7-4826-441f-b03e-496f8b30d14f" containerID="fb85683809997cc252d73dcfba44cda7d39488af4a553a511b5193b263ecb42f" exitCode=0 Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.251527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" event={"ID":"274393d7-4826-441f-b03e-496f8b30d14f","Type":"ContainerDied","Data":"fb85683809997cc252d73dcfba44cda7d39488af4a553a511b5193b263ecb42f"} Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.251552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" event={"ID":"274393d7-4826-441f-b03e-496f8b30d14f","Type":"ContainerStarted","Data":"845c8b77f357f3da39ea9b97bf6f6b03262fd80bf6de7a1ea273df631887be5a"} Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.261655 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerStarted","Data":"7be519fd096586cc8d01c83014b5ed553acd5ce1907fc2170d966080e3f9daa1"} Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.264240 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.264219875 podStartE2EDuration="2.264219875s" podCreationTimestamp="2026-02-18 00:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:54:01.24005751 +0000 UTC m=+1194.545894242" watchObservedRunningTime="2026-02-18 00:54:01.264219875 +0000 UTC m=+1194.570056607" Feb 18 00:54:01 crc kubenswrapper[4858]: I0218 00:54:01.431784 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a241bab-d126-4228-8265-fba10001ce81" path="/var/lib/kubelet/pods/4a241bab-d126-4228-8265-fba10001ce81/volumes" Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.271102 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e","Type":"ContainerStarted","Data":"71132ec74d595e9c7937d55c7dcf34431c4f64adeed0e8a277fd63e4fd485089"} Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.272745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" event={"ID":"274393d7-4826-441f-b03e-496f8b30d14f","Type":"ContainerStarted","Data":"d58a3ee463deacbe350aade08b64d36ca9b81bacf2992b00b8188d8ccd14ae40"} Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.273098 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.275323 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerStarted","Data":"64f63d278ba85e7f0144883748bd873af083da72c36460255b86e1249e068ded"} Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.295340 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=1.920110368 podStartE2EDuration="3.295290544s" podCreationTimestamp="2026-02-18 00:53:59 +0000 UTC" firstStartedPulling="2026-02-18 00:54:00.272855098 +0000 UTC m=+1193.578691830" lastFinishedPulling="2026-02-18 00:54:01.648035274 +0000 UTC m=+1194.953872006" observedRunningTime="2026-02-18 00:54:02.286926411 +0000 UTC m=+1195.592763143" watchObservedRunningTime="2026-02-18 00:54:02.295290544 +0000 UTC m=+1195.601127276" Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.347599 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.366166 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.373331 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" podStartSLOduration=3.373306534 podStartE2EDuration="3.373306534s" podCreationTimestamp="2026-02-18 00:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:54:02.342193741 +0000 UTC m=+1195.648030493" watchObservedRunningTime="2026-02-18 00:54:02.373306534 +0000 UTC m=+1195.679143316" Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.735567 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 00:54:02 crc kubenswrapper[4858]: I0218 00:54:02.798349 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:54:03 crc kubenswrapper[4858]: I0218 00:54:03.284157 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerStarted","Data":"6e0a466378eeae49668cc5eec4f3ccb009683e62b40bcae1e70b2c8ee9b4f062"} Feb 18 00:54:03 crc kubenswrapper[4858]: I0218 00:54:03.285514 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerStarted","Data":"71a7331a81a352e14cd4e8919ff0b00ce81ddf7283b980a14490e23d4537ad08"} Feb 18 00:54:03 crc kubenswrapper[4858]: I0218 00:54:03.284845 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fceda99c-5b24-470d-a686-fea2bb92d258" containerName="probe" containerID="cri-o://3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84" gracePeriod=30 Feb 18 00:54:03 crc kubenswrapper[4858]: I0218 00:54:03.284643 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fceda99c-5b24-470d-a686-fea2bb92d258" containerName="cinder-scheduler" containerID="cri-o://1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5" gracePeriod=30 Feb 18 00:54:04 crc kubenswrapper[4858]: I0218 00:54:04.384563 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" containerName="cloudkitty-proc" containerID="cri-o://71132ec74d595e9c7937d55c7dcf34431c4f64adeed0e8a277fd63e4fd485089" gracePeriod=30 Feb 18 00:54:04 crc kubenswrapper[4858]: I0218 00:54:04.384963 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cloudkitty-api-0" podUID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerName="cloudkitty-api-log" containerID="cri-o://6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add" gracePeriod=30 Feb 18 00:54:04 crc kubenswrapper[4858]: I0218 00:54:04.385379 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerName="cloudkitty-api" containerID="cri-o://602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c" gracePeriod=30 Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.278064 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.397025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerStarted","Data":"4e2066a343c11600c1d4acbe3deef0499800cb8653fd90413858b80c55e96e62"} Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.397348 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.397743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-logs\") pod \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.397779 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data\") pod \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.397804 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data-custom\") pod \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.397908 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-scripts\") pod \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.397980 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-certs\") pod \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.398117 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-logs" (OuterVolumeSpecName: "logs") pod "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" (UID: "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.398171 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72mf6\" (UniqueName: \"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-kube-api-access-72mf6\") pod \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.398259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-combined-ca-bundle\") pod \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\" (UID: \"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098\") " Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.398835 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.403355 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-scripts" (OuterVolumeSpecName: "scripts") pod "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" (UID: "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.403872 4858 generic.go:334] "Generic (PLEG): container finished" podID="fceda99c-5b24-470d-a686-fea2bb92d258" containerID="3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84" exitCode=0 Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.403942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fceda99c-5b24-470d-a686-fea2bb92d258","Type":"ContainerDied","Data":"3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84"} Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.405691 4858 generic.go:334] "Generic (PLEG): container finished" podID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerID="602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c" exitCode=0 Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.405708 4858 generic.go:334] "Generic (PLEG): container finished" podID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerID="6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add" exitCode=143 Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.405722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098","Type":"ContainerDied","Data":"602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c"} Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.405737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098","Type":"ContainerDied","Data":"6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add"} Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.405747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"1de0a7c1-4b7f-4e00-a64a-bc9582ecb098","Type":"ContainerDied","Data":"c608b4197f2af7e56c14f9f9c8a52aa24f48e8189c50838ea61a32ed1e13b4ba"} Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.405762 4858 scope.go:117] "RemoveContainer" 
containerID="602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.405866 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.406652 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" (UID: "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.409622 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-kube-api-access-72mf6" (OuterVolumeSpecName: "kube-api-access-72mf6") pod "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" (UID: "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098"). InnerVolumeSpecName "kube-api-access-72mf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.411565 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-certs" (OuterVolumeSpecName: "certs") pod "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" (UID: "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.427540 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.322357024 podStartE2EDuration="6.427525377s" podCreationTimestamp="2026-02-18 00:53:59 +0000 UTC" firstStartedPulling="2026-02-18 00:54:00.302857855 +0000 UTC m=+1193.608694587" lastFinishedPulling="2026-02-18 00:54:04.408026208 +0000 UTC m=+1197.713862940" observedRunningTime="2026-02-18 00:54:05.426971954 +0000 UTC m=+1198.732808686" watchObservedRunningTime="2026-02-18 00:54:05.427525377 +0000 UTC m=+1198.733362109" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.434895 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data" (OuterVolumeSpecName: "config-data") pod "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" (UID: "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.435864 4858 scope.go:117] "RemoveContainer" containerID="6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.455649 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" (UID: "1de0a7c1-4b7f-4e00-a64a-bc9582ecb098"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.502003 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72mf6\" (UniqueName: \"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-kube-api-access-72mf6\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.502047 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.502057 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.502066 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.502075 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.502084 4858 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.533319 4858 scope.go:117] "RemoveContainer" containerID="602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c" Feb 18 00:54:05 crc kubenswrapper[4858]: E0218 00:54:05.535707 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c\": container with ID starting with 602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c not found: ID does not exist" containerID="602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.535747 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c"} err="failed to get container status \"602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c\": rpc error: code = NotFound desc = could not find container \"602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c\": container with ID starting with 602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c not found: ID does not exist" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.535775 4858 scope.go:117] "RemoveContainer" containerID="6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add" Feb 18 00:54:05 crc kubenswrapper[4858]: E0218 00:54:05.536053 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add\": container with ID starting with 6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add not found: ID does not exist" containerID="6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add" Feb 18 00:54:05 crc 
kubenswrapper[4858]: I0218 00:54:05.536078 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add"} err="failed to get container status \"6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add\": rpc error: code = NotFound desc = could not find container \"6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add\": container with ID starting with 6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add not found: ID does not exist" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.536091 4858 scope.go:117] "RemoveContainer" containerID="602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.536261 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c"} err="failed to get container status \"602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c\": rpc error: code = NotFound desc = could not find container \"602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c\": container with ID starting with 602baedd9f61f42ec39c1dffabd51471d9e2fdc173e8713c080c9b6e806a422c not found: ID does not exist" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.536280 4858 scope.go:117] "RemoveContainer" containerID="6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.536789 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add"} err="failed to get container status \"6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add\": rpc error: code = NotFound desc = could not find container \"6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add\": container with ID starting with 6b4e15ca4078ecdb46177e9c65e759298b6767770cbd71118541567bb46a6add not found: ID does not exist" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.760584 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.775533 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.803413 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 18 00:54:05 crc kubenswrapper[4858]: E0218 00:54:05.804552 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a241bab-d126-4228-8265-fba10001ce81" containerName="dnsmasq-dns" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.804681 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a241bab-d126-4228-8265-fba10001ce81" containerName="dnsmasq-dns" Feb 18 00:54:05 crc kubenswrapper[4858]: E0218 00:54:05.804814 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a241bab-d126-4228-8265-fba10001ce81" containerName="init" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.804898 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a241bab-d126-4228-8265-fba10001ce81" containerName="init" Feb 18 00:54:05 crc kubenswrapper[4858]: E0218 00:54:05.804981 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerName="cloudkitty-api-log" Feb 18 
00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.805058 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerName="cloudkitty-api-log" Feb 18 00:54:05 crc kubenswrapper[4858]: E0218 00:54:05.805145 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerName="cloudkitty-api" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.805228 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerName="cloudkitty-api" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.805780 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a241bab-d126-4228-8265-fba10001ce81" containerName="dnsmasq-dns" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.806160 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerName="cloudkitty-api" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.806303 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" containerName="cloudkitty-api-log" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.809835 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.816259 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.819712 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.829288 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.854454 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.910015 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.910074 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xpf\" (UniqueName: \"kubernetes.io/projected/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-kube-api-access-z5xpf\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.910116 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-certs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.910146 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " 
pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.910178 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.910221 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-logs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.910303 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-config-data\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.910658 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:05 crc kubenswrapper[4858]: I0218 00:54:05.910689 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-scripts\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.012479 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.012573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xpf\" (UniqueName: \"kubernetes.io/projected/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-kube-api-access-z5xpf\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.012623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-certs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.012664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.012705 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.012767 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-logs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.012801 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-config-data\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.012902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.012941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-scripts\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.014563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-logs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.020193 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.020269 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.020680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-certs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.021265 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.022214 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-config-data\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.031487 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-scripts\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.032581 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xpf\" (UniqueName: \"kubernetes.io/projected/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-kube-api-access-z5xpf\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.043356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2\") " pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.088931 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.134059 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 18 00:54:06 crc kubenswrapper[4858]: I0218 00:54:06.637511 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 18 00:54:06 crc kubenswrapper[4858]: W0218 00:54:06.643613 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod311f8faa_b6c8_4c0f_875a_cf09c1e9dbf2.slice/crio-d115610a6a3857dc6090f4438b8953c971cbc6407acc45b750ac4a5a23326dbe WatchSource:0}: Error finding container d115610a6a3857dc6090f4438b8953c971cbc6407acc45b750ac4a5a23326dbe: Status 404 returned error can't find the container with id d115610a6a3857dc6090f4438b8953c971cbc6407acc45b750ac4a5a23326dbe Feb 18 00:54:06 crc kubenswrapper[4858]: E0218 00:54:06.670667 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfceda99c_5b24_470d_a686_fea2bb92d258.slice/crio-conmon-1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfceda99c_5b24_470d_a686_fea2bb92d258.slice/crio-1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5.scope\": RecentStats: unable to find data in memory cache]" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.041255 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.182147 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data-custom\") pod \"fceda99c-5b24-470d-a686-fea2bb92d258\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.182201 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fceda99c-5b24-470d-a686-fea2bb92d258-etc-machine-id\") pod \"fceda99c-5b24-470d-a686-fea2bb92d258\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.182339 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvzbc\" (UniqueName: \"kubernetes.io/projected/fceda99c-5b24-470d-a686-fea2bb92d258-kube-api-access-fvzbc\") pod \"fceda99c-5b24-470d-a686-fea2bb92d258\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.182379 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-combined-ca-bundle\") pod \"fceda99c-5b24-470d-a686-fea2bb92d258\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.182464 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-scripts\") pod \"fceda99c-5b24-470d-a686-fea2bb92d258\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.182506 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data\") pod \"fceda99c-5b24-470d-a686-fea2bb92d258\" (UID: \"fceda99c-5b24-470d-a686-fea2bb92d258\") " Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.188860 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fceda99c-5b24-470d-a686-fea2bb92d258-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fceda99c-5b24-470d-a686-fea2bb92d258" (UID: "fceda99c-5b24-470d-a686-fea2bb92d258"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.194899 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fceda99c-5b24-470d-a686-fea2bb92d258" (UID: "fceda99c-5b24-470d-a686-fea2bb92d258"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.209484 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-scripts" (OuterVolumeSpecName: "scripts") pod "fceda99c-5b24-470d-a686-fea2bb92d258" (UID: "fceda99c-5b24-470d-a686-fea2bb92d258"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.213795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fceda99c-5b24-470d-a686-fea2bb92d258-kube-api-access-fvzbc" (OuterVolumeSpecName: "kube-api-access-fvzbc") pod "fceda99c-5b24-470d-a686-fea2bb92d258" (UID: "fceda99c-5b24-470d-a686-fea2bb92d258"). InnerVolumeSpecName "kube-api-access-fvzbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.284810 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.284838 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fceda99c-5b24-470d-a686-fea2bb92d258-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.284849 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvzbc\" (UniqueName: \"kubernetes.io/projected/fceda99c-5b24-470d-a686-fea2bb92d258-kube-api-access-fvzbc\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.284857 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.300708 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fceda99c-5b24-470d-a686-fea2bb92d258" (UID: "fceda99c-5b24-470d-a686-fea2bb92d258"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.370707 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data" (OuterVolumeSpecName: "config-data") pod "fceda99c-5b24-470d-a686-fea2bb92d258" (UID: "fceda99c-5b24-470d-a686-fea2bb92d258"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.386073 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.386101 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fceda99c-5b24-470d-a686-fea2bb92d258-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.433617 4858 generic.go:334] "Generic (PLEG): container finished" podID="fceda99c-5b24-470d-a686-fea2bb92d258" containerID="1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5" exitCode=0 Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.433817 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.434315 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de0a7c1-4b7f-4e00-a64a-bc9582ecb098" path="/var/lib/kubelet/pods/1de0a7c1-4b7f-4e00-a64a-bc9582ecb098/volumes" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.435236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fceda99c-5b24-470d-a686-fea2bb92d258","Type":"ContainerDied","Data":"1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5"} Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.435266 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fceda99c-5b24-470d-a686-fea2bb92d258","Type":"ContainerDied","Data":"75f03d39772eeb98499ecd24aaf1c34d357bbc8b086fda7adbadd9596c3c6639"} Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.435287 4858 scope.go:117] "RemoveContainer" containerID="3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.454547 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2","Type":"ContainerStarted","Data":"8478056b72c57814c04e77c8f1551e5f8213a94d14e4ba9711ffcdd3beffb8cb"} Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.454611 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2","Type":"ContainerStarted","Data":"b05011b4d8856f3c88ded5f84d72c82cf4ee12413781e59280032455c42e7635"} Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.454622 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2","Type":"ContainerStarted","Data":"d115610a6a3857dc6090f4438b8953c971cbc6407acc45b750ac4a5a23326dbe"} Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.456411 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.559365 4858 scope.go:117] "RemoveContainer" containerID="1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.567487 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.56746421 podStartE2EDuration="2.56746421s" podCreationTimestamp="2026-02-18 00:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:54:07.511587706 +0000 UTC m=+1200.817424438" watchObservedRunningTime="2026-02-18 00:54:07.56746421 +0000 UTC m=+1200.873300932" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.630694 4858 scope.go:117] "RemoveContainer" containerID="3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84" Feb 18 00:54:07 crc kubenswrapper[4858]: E0218 00:54:07.635045 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84\": container with ID starting with 3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84 not found: ID does not exist" containerID="3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84" Feb 18 
00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.635156 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84"} err="failed to get container status \"3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84\": rpc error: code = NotFound desc = could not find container \"3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84\": container with ID starting with 3faf8f606e7c009e3df4f356fbb639e7d21dd238168286d18c84f196d211fa84 not found: ID does not exist" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.635258 4858 scope.go:117] "RemoveContainer" containerID="1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.637031 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:54:07 crc kubenswrapper[4858]: E0218 00:54:07.641616 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5\": container with ID starting with 1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5 not found: ID does not exist" containerID="1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.641664 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5"} err="failed to get container status \"1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5\": rpc error: code = NotFound desc = could not find container \"1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5\": container with ID starting with 1503d174cb41d7833714ca4adc7b94dc210f047cc4ffdc62c2d191be079f3ca5 not found: ID does not exist" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.658130 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.666680 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:54:07 crc kubenswrapper[4858]: E0218 00:54:07.667129 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fceda99c-5b24-470d-a686-fea2bb92d258" containerName="cinder-scheduler" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.667147 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fceda99c-5b24-470d-a686-fea2bb92d258" containerName="cinder-scheduler" Feb 18 00:54:07 crc kubenswrapper[4858]: E0218 00:54:07.667158 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fceda99c-5b24-470d-a686-fea2bb92d258" containerName="probe" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.667165 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fceda99c-5b24-470d-a686-fea2bb92d258" containerName="probe" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.667370 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fceda99c-5b24-470d-a686-fea2bb92d258" containerName="probe" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.667391 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fceda99c-5b24-470d-a686-fea2bb92d258" containerName="cinder-scheduler" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.668454 4858 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.673009 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.674954 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.747329 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.747460 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-scripts\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.747519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-config-data\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.755812 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.755890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhlxj\" (UniqueName: \"kubernetes.io/projected/e6282ef1-5606-4bda-aea6-da44f3b7ddca-kube-api-access-hhlxj\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.756134 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6282ef1-5606-4bda-aea6-da44f3b7ddca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.857997 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-scripts\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.858050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-config-data\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.858100 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.858125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhlxj\" (UniqueName: \"kubernetes.io/projected/e6282ef1-5606-4bda-aea6-da44f3b7ddca-kube-api-access-hhlxj\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.858194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6282ef1-5606-4bda-aea6-da44f3b7ddca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.858257 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.858353 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6282ef1-5606-4bda-aea6-da44f3b7ddca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.864825 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.867804 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-scripts\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.868697 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-config-data\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.877192 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6282ef1-5606-4bda-aea6-da44f3b7ddca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.879919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhlxj\" (UniqueName: \"kubernetes.io/projected/e6282ef1-5606-4bda-aea6-da44f3b7ddca-kube-api-access-hhlxj\") pod \"cinder-scheduler-0\" (UID: \"e6282ef1-5606-4bda-aea6-da44f3b7ddca\") " 
pod="openstack/cinder-scheduler-0" Feb 18 00:54:07 crc kubenswrapper[4858]: I0218 00:54:07.985364 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:54:08 crc kubenswrapper[4858]: I0218 00:54:08.459714 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-564596946d-g2qdq" Feb 18 00:54:08 crc kubenswrapper[4858]: I0218 00:54:08.520906 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-564596946d-g2qdq" Feb 18 00:54:08 crc kubenswrapper[4858]: I0218 00:54:08.521915 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:54:08 crc kubenswrapper[4858]: I0218 00:54:08.561930 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:54:08 crc kubenswrapper[4858]: W0218 00:54:08.593423 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6282ef1_5606_4bda_aea6_da44f3b7ddca.slice/crio-0f8e76af7c98bbb4d35abf21b96a9d629d480bb27f8fb8f1a843ab02dfe28e48 WatchSource:0}: Error finding container 0f8e76af7c98bbb4d35abf21b96a9d629d480bb27f8fb8f1a843ab02dfe28e48: Status 404 returned error can't find the container with id 0f8e76af7c98bbb4d35abf21b96a9d629d480bb27f8fb8f1a843ab02dfe28e48 Feb 18 00:54:08 crc kubenswrapper[4858]: I0218 00:54:08.647068 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5cf649f6f9-dtsbl" Feb 18 00:54:08 crc kubenswrapper[4858]: I0218 00:54:08.649121 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-666bf74cdd-hjbwv" Feb 18 00:54:08 crc kubenswrapper[4858]: I0218 00:54:08.771039 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-564596946d-g2qdq"] Feb 18 00:54:09 crc kubenswrapper[4858]: I0218 00:54:09.446126 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fceda99c-5b24-470d-a686-fea2bb92d258" path="/var/lib/kubelet/pods/fceda99c-5b24-470d-a686-fea2bb92d258/volumes" Feb 18 00:54:09 crc kubenswrapper[4858]: I0218 00:54:09.493999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e6282ef1-5606-4bda-aea6-da44f3b7ddca","Type":"ContainerStarted","Data":"1d65f591c35f6bab41e7a30107229aba87f484bd1fccfaa609cd7c99c76d2b51"} Feb 18 00:54:09 crc kubenswrapper[4858]: I0218 00:54:09.494058 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e6282ef1-5606-4bda-aea6-da44f3b7ddca","Type":"ContainerStarted","Data":"0f8e76af7c98bbb4d35abf21b96a9d629d480bb27f8fb8f1a843ab02dfe28e48"} Feb 18 00:54:09 crc kubenswrapper[4858]: I0218 00:54:09.861686 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:54:09 crc kubenswrapper[4858]: I0218 00:54:09.928181 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qtmhw"] Feb 18 00:54:09 crc kubenswrapper[4858]: I0218 00:54:09.928412 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" podUID="215e6dbb-5ebf-446a-8326-1e96d37a38c3" containerName="dnsmasq-dns" containerID="cri-o://ca87f25f8139a0c6b9d1c3980ef6cf6225fe3c7b9dea72017e89250425d0003c" gracePeriod=10 Feb 18 00:54:10 crc 
kubenswrapper[4858]: I0218 00:54:10.526377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e6282ef1-5606-4bda-aea6-da44f3b7ddca","Type":"ContainerStarted","Data":"fd22364ff2081800441af89e63e429b3672f8950b150b83aadc5585924e98855"} Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.534708 4858 generic.go:334] "Generic (PLEG): container finished" podID="c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" containerID="71132ec74d595e9c7937d55c7dcf34431c4f64adeed0e8a277fd63e4fd485089" exitCode=0 Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.534766 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e","Type":"ContainerDied","Data":"71132ec74d595e9c7937d55c7dcf34431c4f64adeed0e8a277fd63e4fd485089"} Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.537069 4858 generic.go:334] "Generic (PLEG): container finished" podID="215e6dbb-5ebf-446a-8326-1e96d37a38c3" containerID="ca87f25f8139a0c6b9d1c3980ef6cf6225fe3c7b9dea72017e89250425d0003c" exitCode=0 Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.537274 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-564596946d-g2qdq" podUID="245419e7-d61b-4f15-acef-861d6025e566" containerName="placement-log" containerID="cri-o://e9247ab0bf2a23a169361855ed550448d58ce164c047f2035caed89f549f22c7" gracePeriod=30 Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.537369 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" event={"ID":"215e6dbb-5ebf-446a-8326-1e96d37a38c3","Type":"ContainerDied","Data":"ca87f25f8139a0c6b9d1c3980ef6cf6225fe3c7b9dea72017e89250425d0003c"} Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.537413 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-564596946d-g2qdq" podUID="245419e7-d61b-4f15-acef-861d6025e566" containerName="placement-api" containerID="cri-o://fd458991cc74d4993b215b20b11975d3e1322a8d6c981e44aa8d7c3dfd75cbf3" gracePeriod=30 Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.580576 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.582931 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.587895 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.587924 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6xwcs" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.588537 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.590650 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.590638652 podStartE2EDuration="3.590638652s" podCreationTimestamp="2026-02-18 00:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:54:10.552069187 +0000 UTC m=+1203.857905919" watchObservedRunningTime="2026-02-18 00:54:10.590638652 +0000 UTC m=+1203.896475384" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.604120 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.638979 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67301629-da8b-43e3-9c9e-fe99444a6ef1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.639093 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/67301629-da8b-43e3-9c9e-fe99444a6ef1-openstack-config-secret\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.639195 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k45wr\" (UniqueName: \"kubernetes.io/projected/67301629-da8b-43e3-9c9e-fe99444a6ef1-kube-api-access-k45wr\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.639226 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/67301629-da8b-43e3-9c9e-fe99444a6ef1-openstack-config\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.736880 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.741575 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/67301629-da8b-43e3-9c9e-fe99444a6ef1-openstack-config-secret\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.741682 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k45wr\" (UniqueName: \"kubernetes.io/projected/67301629-da8b-43e3-9c9e-fe99444a6ef1-kube-api-access-k45wr\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.741710 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/67301629-da8b-43e3-9c9e-fe99444a6ef1-openstack-config\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.741774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67301629-da8b-43e3-9c9e-fe99444a6ef1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.742879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/67301629-da8b-43e3-9c9e-fe99444a6ef1-openstack-config\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.750629 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/67301629-da8b-43e3-9c9e-fe99444a6ef1-openstack-config-secret\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.763859 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67301629-da8b-43e3-9c9e-fe99444a6ef1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.764626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k45wr\" (UniqueName: \"kubernetes.io/projected/67301629-da8b-43e3-9c9e-fe99444a6ef1-kube-api-access-k45wr\") pod \"openstackclient\" (UID: \"67301629-da8b-43e3-9c9e-fe99444a6ef1\") " pod="openstack/openstackclient" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.842731 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-swift-storage-0\") pod \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.842786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-svc\") pod \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.842816 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-config\") pod \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.842865 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-nb\") pod \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.843368 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-sb\") pod \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.843419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-282gl\" (UniqueName: \"kubernetes.io/projected/215e6dbb-5ebf-446a-8326-1e96d37a38c3-kube-api-access-282gl\") pod \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\" (UID: \"215e6dbb-5ebf-446a-8326-1e96d37a38c3\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.846391 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.849526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215e6dbb-5ebf-446a-8326-1e96d37a38c3-kube-api-access-282gl" (OuterVolumeSpecName: "kube-api-access-282gl") pod "215e6dbb-5ebf-446a-8326-1e96d37a38c3" (UID: "215e6dbb-5ebf-446a-8326-1e96d37a38c3"). InnerVolumeSpecName "kube-api-access-282gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.900092 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-config" (OuterVolumeSpecName: "config") pod "215e6dbb-5ebf-446a-8326-1e96d37a38c3" (UID: "215e6dbb-5ebf-446a-8326-1e96d37a38c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.916142 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "215e6dbb-5ebf-446a-8326-1e96d37a38c3" (UID: "215e6dbb-5ebf-446a-8326-1e96d37a38c3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.925318 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "215e6dbb-5ebf-446a-8326-1e96d37a38c3" (UID: "215e6dbb-5ebf-446a-8326-1e96d37a38c3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.941823 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "215e6dbb-5ebf-446a-8326-1e96d37a38c3" (UID: "215e6dbb-5ebf-446a-8326-1e96d37a38c3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945075 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data-custom\") pod \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945120 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data\") pod \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945145 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqphl\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-kube-api-access-cqphl\") pod \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945173 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-combined-ca-bundle\") pod \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945210 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-scripts\") pod \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945251 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-certs\") pod \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\" (UID: \"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e\") " Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945693 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945720 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945729 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945743 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.945755 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-282gl\" (UniqueName: \"kubernetes.io/projected/215e6dbb-5ebf-446a-8326-1e96d37a38c3-kube-api-access-282gl\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.947439 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" (UID: "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.951670 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-kube-api-access-cqphl" (OuterVolumeSpecName: "kube-api-access-cqphl") pod "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" (UID: "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e"). InnerVolumeSpecName "kube-api-access-cqphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.953073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-certs" (OuterVolumeSpecName: "certs") pod "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" (UID: "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.957687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-scripts" (OuterVolumeSpecName: "scripts") pod "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" (UID: "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.964346 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "215e6dbb-5ebf-446a-8326-1e96d37a38c3" (UID: "215e6dbb-5ebf-446a-8326-1e96d37a38c3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.983607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" (UID: "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:10 crc kubenswrapper[4858]: I0218 00:54:10.984616 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data" (OuterVolumeSpecName: "config-data") pod "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" (UID: "c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.047915 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.047950 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/215e6dbb-5ebf-446a-8326-1e96d37a38c3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.047960 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.047969 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqphl\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-kube-api-access-cqphl\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.047979 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.047987 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.047995 4858 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.051054 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.527053 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 00:54:11 crc kubenswrapper[4858]: W0218 00:54:11.527543 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67301629_da8b_43e3_9c9e_fe99444a6ef1.slice/crio-3611482c4e64c630dd8bc704ef32c9e4133799c6fb3722c102ede27e02af3c86 WatchSource:0}: Error finding container 3611482c4e64c630dd8bc704ef32c9e4133799c6fb3722c102ede27e02af3c86: Status 404 returned error can't find the container with id 3611482c4e64c630dd8bc704ef32c9e4133799c6fb3722c102ede27e02af3c86 Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.550770 4858 generic.go:334] "Generic (PLEG): container finished" podID="245419e7-d61b-4f15-acef-861d6025e566" containerID="e9247ab0bf2a23a169361855ed550448d58ce164c047f2035caed89f549f22c7" exitCode=143 Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.550834 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-564596946d-g2qdq" event={"ID":"245419e7-d61b-4f15-acef-861d6025e566","Type":"ContainerDied","Data":"e9247ab0bf2a23a169361855ed550448d58ce164c047f2035caed89f549f22c7"} Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.553183 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.553184 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qtmhw" event={"ID":"215e6dbb-5ebf-446a-8326-1e96d37a38c3","Type":"ContainerDied","Data":"9dab76244bfa260fef65d626021d59bac811d8f06f851541af86176a45805f19"} Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.553261 4858 scope.go:117] "RemoveContainer" containerID="ca87f25f8139a0c6b9d1c3980ef6cf6225fe3c7b9dea72017e89250425d0003c" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.563115 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.563122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e","Type":"ContainerDied","Data":"3d205a2f5802366d09be80f4b32d5aa0033d38e8577d85d14c4f6982c2e71034"} Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.566730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"67301629-da8b-43e3-9c9e-fe99444a6ef1","Type":"ContainerStarted","Data":"3611482c4e64c630dd8bc704ef32c9e4133799c6fb3722c102ede27e02af3c86"} Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.577310 4858 scope.go:117] "RemoveContainer" containerID="2f47d115501edceea4ec9206ac64df077a75c19704c9691a9f468eb9a806289b" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.580924 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qtmhw"] Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.597473 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qtmhw"] Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.609563 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.618363 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.628040 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 18 00:54:11 crc kubenswrapper[4858]: E0218 00:54:11.628409 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215e6dbb-5ebf-446a-8326-1e96d37a38c3" containerName="init" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.628425 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="215e6dbb-5ebf-446a-8326-1e96d37a38c3" containerName="init" Feb 18 00:54:11 crc kubenswrapper[4858]: E0218 00:54:11.628438 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215e6dbb-5ebf-446a-8326-1e96d37a38c3" containerName="dnsmasq-dns" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.628446 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="215e6dbb-5ebf-446a-8326-1e96d37a38c3" containerName="dnsmasq-dns" Feb 18 00:54:11 crc kubenswrapper[4858]: E0218 00:54:11.628471 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" containerName="cloudkitty-proc" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.628476 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" containerName="cloudkitty-proc" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.628652 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="215e6dbb-5ebf-446a-8326-1e96d37a38c3" containerName="dnsmasq-dns" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.628678 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" containerName="cloudkitty-proc" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.629333 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.632286 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.643282 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.657760 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-config-data\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.658008 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-scripts\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.658254 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.658396 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/24a891be-9404-4083-9503-8935ce9545c0-certs\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.658472 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.658756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7k8w\" (UniqueName: \"kubernetes.io/projected/24a891be-9404-4083-9503-8935ce9545c0-kube-api-access-h7k8w\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.662803 4858 scope.go:117] "RemoveContainer" containerID="71132ec74d595e9c7937d55c7dcf34431c4f64adeed0e8a277fd63e4fd485089" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.760923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7k8w\" (UniqueName: 
\"kubernetes.io/projected/24a891be-9404-4083-9503-8935ce9545c0-kube-api-access-h7k8w\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.761074 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-config-data\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.761107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-scripts\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.761164 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.761198 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/24a891be-9404-4083-9503-8935ce9545c0-certs\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.761217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.767062 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.771159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/24a891be-9404-4083-9503-8935ce9545c0-certs\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.771380 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.772403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-config-data\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.779838 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/24a891be-9404-4083-9503-8935ce9545c0-scripts\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:11 crc kubenswrapper[4858]: I0218 00:54:11.782270 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7k8w\" (UniqueName: \"kubernetes.io/projected/24a891be-9404-4083-9503-8935ce9545c0-kube-api-access-h7k8w\") pod \"cloudkitty-proc-0\" (UID: \"24a891be-9404-4083-9503-8935ce9545c0\") " pod="openstack/cloudkitty-proc-0" Feb 18 00:54:12 crc kubenswrapper[4858]: I0218 00:54:12.004008 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 18 00:54:12 crc kubenswrapper[4858]: I0218 00:54:12.474256 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 18 00:54:12 crc kubenswrapper[4858]: W0218 00:54:12.484880 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a891be_9404_4083_9503_8935ce9545c0.slice/crio-42746c1b081096443fdee64032bda6ef25c7db91b41ba978f4f2a0e4e095af02 WatchSource:0}: Error finding container 42746c1b081096443fdee64032bda6ef25c7db91b41ba978f4f2a0e4e095af02: Status 404 returned error can't find the container with id 42746c1b081096443fdee64032bda6ef25c7db91b41ba978f4f2a0e4e095af02 Feb 18 00:54:12 crc kubenswrapper[4858]: I0218 00:54:12.592774 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"24a891be-9404-4083-9503-8935ce9545c0","Type":"ContainerStarted","Data":"42746c1b081096443fdee64032bda6ef25c7db91b41ba978f4f2a0e4e095af02"} Feb 18 00:54:12 crc kubenswrapper[4858]: I0218 00:54:12.985886 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 00:54:13 crc kubenswrapper[4858]: I0218 00:54:13.433375 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215e6dbb-5ebf-446a-8326-1e96d37a38c3" path="/var/lib/kubelet/pods/215e6dbb-5ebf-446a-8326-1e96d37a38c3/volumes" Feb 18 00:54:13 crc kubenswrapper[4858]: I0218 00:54:13.434175 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e" path="/var/lib/kubelet/pods/c40b2f35-ca7b-4cf9-8e7f-90eac6d3552e/volumes" Feb 18 00:54:13 crc kubenswrapper[4858]: I0218 00:54:13.614219 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"24a891be-9404-4083-9503-8935ce9545c0","Type":"ContainerStarted","Data":"e5fffaa91953ac443c60e23c2393658d0f6e543d011ad3e1f4aa90e0db876fbb"} Feb 18 00:54:13 crc kubenswrapper[4858]: I0218 00:54:13.645117 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.645093079 podStartE2EDuration="2.645093079s" podCreationTimestamp="2026-02-18 00:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:54:13.640612951 +0000 UTC m=+1206.946449683" watchObservedRunningTime="2026-02-18 00:54:13.645093079 +0000 UTC m=+1206.950929811" Feb 18 00:54:14 crc kubenswrapper[4858]: I0218 00:54:14.674846 4858 generic.go:334] "Generic (PLEG): container finished" podID="245419e7-d61b-4f15-acef-861d6025e566" containerID="fd458991cc74d4993b215b20b11975d3e1322a8d6c981e44aa8d7c3dfd75cbf3" exitCode=0 Feb 18 
00:54:14 crc kubenswrapper[4858]: I0218 00:54:14.675999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-564596946d-g2qdq" event={"ID":"245419e7-d61b-4f15-acef-861d6025e566","Type":"ContainerDied","Data":"fd458991cc74d4993b215b20b11975d3e1322a8d6c981e44aa8d7c3dfd75cbf3"} Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.094382 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-564596946d-g2qdq" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.238132 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245419e7-d61b-4f15-acef-861d6025e566-logs\") pod \"245419e7-d61b-4f15-acef-861d6025e566\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.238188 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-config-data\") pod \"245419e7-d61b-4f15-acef-861d6025e566\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.238216 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-internal-tls-certs\") pod \"245419e7-d61b-4f15-acef-861d6025e566\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.238319 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-scripts\") pod \"245419e7-d61b-4f15-acef-861d6025e566\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.238341 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-public-tls-certs\") pod \"245419e7-d61b-4f15-acef-861d6025e566\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.238366 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xpv4\" (UniqueName: \"kubernetes.io/projected/245419e7-d61b-4f15-acef-861d6025e566-kube-api-access-8xpv4\") pod \"245419e7-d61b-4f15-acef-861d6025e566\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.238434 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-combined-ca-bundle\") pod \"245419e7-d61b-4f15-acef-861d6025e566\" (UID: \"245419e7-d61b-4f15-acef-861d6025e566\") " Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.246811 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/245419e7-d61b-4f15-acef-861d6025e566-logs" (OuterVolumeSpecName: "logs") pod "245419e7-d61b-4f15-acef-861d6025e566" (UID: "245419e7-d61b-4f15-acef-861d6025e566"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.268064 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-scripts" (OuterVolumeSpecName: "scripts") pod "245419e7-d61b-4f15-acef-861d6025e566" (UID: "245419e7-d61b-4f15-acef-861d6025e566"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.269705 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245419e7-d61b-4f15-acef-861d6025e566-kube-api-access-8xpv4" (OuterVolumeSpecName: "kube-api-access-8xpv4") pod "245419e7-d61b-4f15-acef-861d6025e566" (UID: "245419e7-d61b-4f15-acef-861d6025e566"). InnerVolumeSpecName "kube-api-access-8xpv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.342955 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xpv4\" (UniqueName: \"kubernetes.io/projected/245419e7-d61b-4f15-acef-861d6025e566-kube-api-access-8xpv4\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.343002 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245419e7-d61b-4f15-acef-861d6025e566-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.343014 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.373682 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "245419e7-d61b-4f15-acef-861d6025e566" (UID: "245419e7-d61b-4f15-acef-861d6025e566"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.406621 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-config-data" (OuterVolumeSpecName: "config-data") pod "245419e7-d61b-4f15-acef-861d6025e566" (UID: "245419e7-d61b-4f15-acef-861d6025e566"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.446847 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.446879 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.457966 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "245419e7-d61b-4f15-acef-861d6025e566" (UID: "245419e7-d61b-4f15-acef-861d6025e566"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.486234 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "245419e7-d61b-4f15-acef-861d6025e566" (UID: "245419e7-d61b-4f15-acef-861d6025e566"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.548292 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.548324 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/245419e7-d61b-4f15-acef-861d6025e566-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.740734 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-564596946d-g2qdq" event={"ID":"245419e7-d61b-4f15-acef-861d6025e566","Type":"ContainerDied","Data":"f2761e1eb357780e2e98285fa77664e075491de1f67e657b2ec1a5b35e23be07"} Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.740789 4858 scope.go:117] "RemoveContainer" containerID="fd458991cc74d4993b215b20b11975d3e1322a8d6c981e44aa8d7c3dfd75cbf3" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.740966 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-564596946d-g2qdq" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.775633 4858 scope.go:117] "RemoveContainer" containerID="e9247ab0bf2a23a169361855ed550448d58ce164c047f2035caed89f549f22c7" Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.792557 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-564596946d-g2qdq"] Feb 18 00:54:15 crc kubenswrapper[4858]: I0218 00:54:15.804350 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-564596946d-g2qdq"] Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.145185 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.145446 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="ceilometer-central-agent" containerID="cri-o://64f63d278ba85e7f0144883748bd873af083da72c36460255b86e1249e068ded" gracePeriod=30 Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.145537 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="proxy-httpd" containerID="cri-o://4e2066a343c11600c1d4acbe3deef0499800cb8653fd90413858b80c55e96e62" gracePeriod=30 Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.145574 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="sg-core" containerID="cri-o://6e0a466378eeae49668cc5eec4f3ccb009683e62b40bcae1e70b2c8ee9b4f062" gracePeriod=30 Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.145584 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="ceilometer-notification-agent" containerID="cri-o://71a7331a81a352e14cd4e8919ff0b00ce81ddf7283b980a14490e23d4537ad08" gracePeriod=30 Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.165324 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.190:3000/\": EOF" Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.760857 4858 generic.go:334] "Generic (PLEG): container finished" podID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerID="4e2066a343c11600c1d4acbe3deef0499800cb8653fd90413858b80c55e96e62" exitCode=0 Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.761083 4858 generic.go:334] "Generic (PLEG): container finished" podID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerID="6e0a466378eeae49668cc5eec4f3ccb009683e62b40bcae1e70b2c8ee9b4f062" exitCode=2 Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.761093 4858 generic.go:334] "Generic (PLEG): container finished" podID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerID="71a7331a81a352e14cd4e8919ff0b00ce81ddf7283b980a14490e23d4537ad08" exitCode=0 Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.761100 4858 generic.go:334] "Generic (PLEG): container finished" podID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerID="64f63d278ba85e7f0144883748bd873af083da72c36460255b86e1249e068ded" exitCode=0 Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.761117 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerDied","Data":"4e2066a343c11600c1d4acbe3deef0499800cb8653fd90413858b80c55e96e62"} Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.761136 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerDied","Data":"6e0a466378eeae49668cc5eec4f3ccb009683e62b40bcae1e70b2c8ee9b4f062"} Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.761145 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerDied","Data":"71a7331a81a352e14cd4e8919ff0b00ce81ddf7283b980a14490e23d4537ad08"} Feb 18 00:54:16 crc kubenswrapper[4858]: I0218 00:54:16.761154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerDied","Data":"64f63d278ba85e7f0144883748bd873af083da72c36460255b86e1249e068ded"} Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.105775 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.183219 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-run-httpd\") pod \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.183345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-sg-core-conf-yaml\") pod \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.183469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-combined-ca-bundle\") pod \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.183869 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20c1fed5-9e72-4c0d-8bf3-e664aee2516b" (UID: "20c1fed5-9e72-4c0d-8bf3-e664aee2516b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.189807 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-log-httpd\") pod \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.189869 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-config-data\") pod \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.189941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-scripts\") pod \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.189977 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjt4f\" (UniqueName: \"kubernetes.io/projected/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-kube-api-access-zjt4f\") pod \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\" (UID: \"20c1fed5-9e72-4c0d-8bf3-e664aee2516b\") " Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.190841 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.202897 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20c1fed5-9e72-4c0d-8bf3-e664aee2516b" (UID: "20c1fed5-9e72-4c0d-8bf3-e664aee2516b"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.207168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-kube-api-access-zjt4f" (OuterVolumeSpecName: "kube-api-access-zjt4f") pod "20c1fed5-9e72-4c0d-8bf3-e664aee2516b" (UID: "20c1fed5-9e72-4c0d-8bf3-e664aee2516b"). InnerVolumeSpecName "kube-api-access-zjt4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.238471 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-scripts" (OuterVolumeSpecName: "scripts") pod "20c1fed5-9e72-4c0d-8bf3-e664aee2516b" (UID: "20c1fed5-9e72-4c0d-8bf3-e664aee2516b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.287651 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20c1fed5-9e72-4c0d-8bf3-e664aee2516b" (UID: "20c1fed5-9e72-4c0d-8bf3-e664aee2516b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.293310 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.293361 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.293371 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.293379 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjt4f\" (UniqueName: \"kubernetes.io/projected/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-kube-api-access-zjt4f\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.414345 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-config-data" (OuterVolumeSpecName: "config-data") pod "20c1fed5-9e72-4c0d-8bf3-e664aee2516b" (UID: "20c1fed5-9e72-4c0d-8bf3-e664aee2516b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.423279 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20c1fed5-9e72-4c0d-8bf3-e664aee2516b" (UID: "20c1fed5-9e72-4c0d-8bf3-e664aee2516b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.440483 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245419e7-d61b-4f15-acef-861d6025e566" path="/var/lib/kubelet/pods/245419e7-d61b-4f15-acef-861d6025e566/volumes" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.497219 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.497245 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c1fed5-9e72-4c0d-8bf3-e664aee2516b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.772835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20c1fed5-9e72-4c0d-8bf3-e664aee2516b","Type":"ContainerDied","Data":"7be519fd096586cc8d01c83014b5ed553acd5ce1907fc2170d966080e3f9daa1"} Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.772897 4858 scope.go:117] "RemoveContainer" containerID="4e2066a343c11600c1d4acbe3deef0499800cb8653fd90413858b80c55e96e62" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.772926 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.800467 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.813923 4858 scope.go:117] "RemoveContainer" containerID="6e0a466378eeae49668cc5eec4f3ccb009683e62b40bcae1e70b2c8ee9b4f062" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.821846 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.838146 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:17 crc kubenswrapper[4858]: E0218 00:54:17.838728 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245419e7-d61b-4f15-acef-861d6025e566" containerName="placement-api" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.838750 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="245419e7-d61b-4f15-acef-861d6025e566" containerName="placement-api" Feb 18 00:54:17 crc kubenswrapper[4858]: E0218 00:54:17.838773 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="sg-core" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.838781 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="sg-core" Feb 18 00:54:17 crc kubenswrapper[4858]: E0218 00:54:17.838815 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="ceilometer-central-agent" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.838824 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="ceilometer-central-agent" Feb 18 00:54:17 crc kubenswrapper[4858]: E0218 00:54:17.838835 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="proxy-httpd" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 
00:54:17.838843 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="proxy-httpd" Feb 18 00:54:17 crc kubenswrapper[4858]: E0218 00:54:17.838863 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245419e7-d61b-4f15-acef-861d6025e566" containerName="placement-log" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.838873 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="245419e7-d61b-4f15-acef-861d6025e566" containerName="placement-log" Feb 18 00:54:17 crc kubenswrapper[4858]: E0218 00:54:17.838909 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="ceilometer-notification-agent" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.838918 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="ceilometer-notification-agent" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.839132 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="245419e7-d61b-4f15-acef-861d6025e566" containerName="placement-log" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.839149 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="ceilometer-notification-agent" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.839164 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="245419e7-d61b-4f15-acef-861d6025e566" containerName="placement-api" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.839178 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="ceilometer-central-agent" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.839195 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="sg-core" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.839218 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" containerName="proxy-httpd" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.841444 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.850259 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.850524 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.855577 4858 scope.go:117] "RemoveContainer" containerID="71a7331a81a352e14cd4e8919ff0b00ce81ddf7283b980a14490e23d4537ad08" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.877041 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.903769 4858 scope.go:117] "RemoveContainer" containerID="64f63d278ba85e7f0144883748bd873af083da72c36460255b86e1249e068ded" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.905111 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.905173 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-scripts\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.905222 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-config-data\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.905244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-run-httpd\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.905273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lvqj\" (UniqueName: \"kubernetes.io/projected/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-kube-api-access-2lvqj\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.905306 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.905367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-log-httpd\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 
00:54:17.989037 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-77fb7b987-d9jrg"] Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.990812 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.999081 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.999393 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 00:54:17 crc kubenswrapper[4858]: I0218 00:54:17.999871 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77fb7b987-d9jrg"] Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.017313 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.017843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.017889 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-scripts\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.017937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-config-data\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.017958 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-run-httpd\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.017983 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lvqj\" (UniqueName: \"kubernetes.io/projected/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-kube-api-access-2lvqj\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.018016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.018074 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-log-httpd\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.018526 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-log-httpd\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.018796 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-run-httpd\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.024233 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-config-data\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.025725 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.028166 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.028305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-scripts\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.040467 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lvqj\" (UniqueName: \"kubernetes.io/projected/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-kube-api-access-2lvqj\") pod \"ceilometer-0\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.119367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-internal-tls-certs\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.119402 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6d69568-ccdd-4684-bc2b-6b6893923701-run-httpd\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.119429 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-config-data\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 
00:54:18.119451 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-combined-ca-bundle\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.119598 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6d69568-ccdd-4684-bc2b-6b6893923701-etc-swift\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.119614 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6d69568-ccdd-4684-bc2b-6b6893923701-log-httpd\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.119650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhz5s\" (UniqueName: \"kubernetes.io/projected/b6d69568-ccdd-4684-bc2b-6b6893923701-kube-api-access-rhz5s\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.119675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-public-tls-certs\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.167375 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.221712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhz5s\" (UniqueName: \"kubernetes.io/projected/b6d69568-ccdd-4684-bc2b-6b6893923701-kube-api-access-rhz5s\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.221771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-public-tls-certs\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.222334 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-internal-tls-certs\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.222364 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6d69568-ccdd-4684-bc2b-6b6893923701-run-httpd\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.222400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-config-data\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.222426 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-combined-ca-bundle\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.222536 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6d69568-ccdd-4684-bc2b-6b6893923701-etc-swift\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.222562 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6d69568-ccdd-4684-bc2b-6b6893923701-log-httpd\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.223050 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6d69568-ccdd-4684-bc2b-6b6893923701-log-httpd\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 
00:54:18.224813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6d69568-ccdd-4684-bc2b-6b6893923701-run-httpd\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.235206 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-combined-ca-bundle\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.236215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-internal-tls-certs\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.237096 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6d69568-ccdd-4684-bc2b-6b6893923701-etc-swift\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.237442 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-config-data\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.240026 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d69568-ccdd-4684-bc2b-6b6893923701-public-tls-certs\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.240411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhz5s\" (UniqueName: \"kubernetes.io/projected/b6d69568-ccdd-4684-bc2b-6b6893923701-kube-api-access-rhz5s\") pod \"swift-proxy-77fb7b987-d9jrg\" (UID: \"b6d69568-ccdd-4684-bc2b-6b6893923701\") " pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.247090 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.399753 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.689962 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:18 crc kubenswrapper[4858]: W0218 00:54:18.701047 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3986e5f_5d3b_4d6f_ac9b_80d135d4d530.slice/crio-afe294d5725588db3596fae26f3e3f3bbe0e42af4ad5b264cf3b24bf996fe8bb WatchSource:0}: Error finding container afe294d5725588db3596fae26f3e3f3bbe0e42af4ad5b264cf3b24bf996fe8bb: Status 404 returned error can't find the container with id afe294d5725588db3596fae26f3e3f3bbe0e42af4ad5b264cf3b24bf996fe8bb Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.718278 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-79f994c65-x27nl" Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.773360 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fcf66f4c6-vkspn"] Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.773592 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fcf66f4c6-vkspn" podUID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerName="neutron-api" containerID="cri-o://9edd90bbbbb35663d5f161377118de61616fcf832527ac5a73b726c83781b6ce" gracePeriod=30 Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.773950 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fcf66f4c6-vkspn" podUID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerName="neutron-httpd" containerID="cri-o://a128da7152c208000c536c7699b24204df2cda117d1aeef05c3105b57de627ee" gracePeriod=30 Feb 18 00:54:18 crc kubenswrapper[4858]: I0218 00:54:18.799454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerStarted","Data":"afe294d5725588db3596fae26f3e3f3bbe0e42af4ad5b264cf3b24bf996fe8bb"} Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.006376 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77fb7b987-d9jrg"] Feb 18 00:54:19 crc kubenswrapper[4858]: W0218 00:54:19.027844 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6d69568_ccdd_4684_bc2b_6b6893923701.slice/crio-87bd2463a10c670a29de84145880b8ddfbee0b3af96cb80ec9fd4d13fb325d3f WatchSource:0}: Error finding container 87bd2463a10c670a29de84145880b8ddfbee0b3af96cb80ec9fd4d13fb325d3f: Status 404 returned error can't find the container with id 87bd2463a10c670a29de84145880b8ddfbee0b3af96cb80ec9fd4d13fb325d3f Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.431747 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c1fed5-9e72-4c0d-8bf3-e664aee2516b" path="/var/lib/kubelet/pods/20c1fed5-9e72-4c0d-8bf3-e664aee2516b/volumes" Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.809716 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerStarted","Data":"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4"} Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.811144 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77fb7b987-d9jrg" 
event={"ID":"b6d69568-ccdd-4684-bc2b-6b6893923701","Type":"ContainerStarted","Data":"3d595e5c83be94d69996d0305060192f8749a107e126f3405258c28a8a366c6c"} Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.811169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77fb7b987-d9jrg" event={"ID":"b6d69568-ccdd-4684-bc2b-6b6893923701","Type":"ContainerStarted","Data":"c0f0b82f16b51f9420b5430c3b92b363dab7938ec1910c5b4a8a511d04926b62"} Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.811182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77fb7b987-d9jrg" event={"ID":"b6d69568-ccdd-4684-bc2b-6b6893923701","Type":"ContainerStarted","Data":"87bd2463a10c670a29de84145880b8ddfbee0b3af96cb80ec9fd4d13fb325d3f"} Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.811285 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.813228 4858 generic.go:334] "Generic (PLEG): container finished" podID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerID="a128da7152c208000c536c7699b24204df2cda117d1aeef05c3105b57de627ee" exitCode=0 Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.813267 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcf66f4c6-vkspn" event={"ID":"d2ca2386-8f22-41a1-87ad-7b9ff91754ef","Type":"ContainerDied","Data":"a128da7152c208000c536c7699b24204df2cda117d1aeef05c3105b57de627ee"} Feb 18 00:54:19 crc kubenswrapper[4858]: I0218 00:54:19.864039 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-77fb7b987-d9jrg" podStartSLOduration=2.864020602 podStartE2EDuration="2.864020602s" podCreationTimestamp="2026-02-18 00:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:54:19.834581649 +0000 UTC m=+1213.140418381" watchObservedRunningTime="2026-02-18 00:54:19.864020602 +0000 UTC m=+1213.169857334" Feb 18 00:54:20 crc kubenswrapper[4858]: I0218 00:54:20.821642 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:22 crc kubenswrapper[4858]: I0218 00:54:22.973400 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:24 crc kubenswrapper[4858]: I0218 00:54:24.864059 4858 generic.go:334] "Generic (PLEG): container finished" podID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerID="9edd90bbbbb35663d5f161377118de61616fcf832527ac5a73b726c83781b6ce" exitCode=0 Feb 18 00:54:24 crc kubenswrapper[4858]: I0218 00:54:24.864138 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcf66f4c6-vkspn" event={"ID":"d2ca2386-8f22-41a1-87ad-7b9ff91754ef","Type":"ContainerDied","Data":"9edd90bbbbb35663d5f161377118de61616fcf832527ac5a73b726c83781b6ce"} Feb 18 00:54:25 crc kubenswrapper[4858]: I0218 00:54:25.281506 4858 scope.go:117] "RemoveContainer" containerID="5eb2202882517a06bafb0fb5c0291a7a488a591448fa0884f7dc9650280dd1a1" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.137725 4858 scope.go:117] "RemoveContainer" containerID="f807440ea4f3dd849a4fc175dd3ddcc5f862ff6faf6206be89049dc1ba290d8d" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.293940 4858 scope.go:117] "RemoveContainer" containerID="b0c2d5bca6148c44cf1784c32e2d9f475624a18b26205b7c2df51f73342a890a" Feb 18 00:54:26 crc kubenswrapper[4858]: 
I0218 00:54:26.601813 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.701004 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48tgr\" (UniqueName: \"kubernetes.io/projected/1e6d2753-32f3-48a3-9908-a82350f136dc-kube-api-access-48tgr\") pod \"1e6d2753-32f3-48a3-9908-a82350f136dc\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.701081 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data\") pod \"1e6d2753-32f3-48a3-9908-a82350f136dc\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.701135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-scripts\") pod \"1e6d2753-32f3-48a3-9908-a82350f136dc\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.701153 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-combined-ca-bundle\") pod \"1e6d2753-32f3-48a3-9908-a82350f136dc\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.701348 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6d2753-32f3-48a3-9908-a82350f136dc-etc-machine-id\") pod \"1e6d2753-32f3-48a3-9908-a82350f136dc\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.701425 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6d2753-32f3-48a3-9908-a82350f136dc-logs\") pod \"1e6d2753-32f3-48a3-9908-a82350f136dc\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.701546 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data-custom\") pod \"1e6d2753-32f3-48a3-9908-a82350f136dc\" (UID: \"1e6d2753-32f3-48a3-9908-a82350f136dc\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.701629 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e6d2753-32f3-48a3-9908-a82350f136dc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1e6d2753-32f3-48a3-9908-a82350f136dc" (UID: "1e6d2753-32f3-48a3-9908-a82350f136dc"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.702067 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6d2753-32f3-48a3-9908-a82350f136dc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.702229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e6d2753-32f3-48a3-9908-a82350f136dc-logs" (OuterVolumeSpecName: "logs") pod "1e6d2753-32f3-48a3-9908-a82350f136dc" (UID: "1e6d2753-32f3-48a3-9908-a82350f136dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.708395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-scripts" (OuterVolumeSpecName: "scripts") pod "1e6d2753-32f3-48a3-9908-a82350f136dc" (UID: "1e6d2753-32f3-48a3-9908-a82350f136dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.709571 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6d2753-32f3-48a3-9908-a82350f136dc-kube-api-access-48tgr" (OuterVolumeSpecName: "kube-api-access-48tgr") pod "1e6d2753-32f3-48a3-9908-a82350f136dc" (UID: "1e6d2753-32f3-48a3-9908-a82350f136dc"). InnerVolumeSpecName "kube-api-access-48tgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.711076 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1e6d2753-32f3-48a3-9908-a82350f136dc" (UID: "1e6d2753-32f3-48a3-9908-a82350f136dc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.755148 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e6d2753-32f3-48a3-9908-a82350f136dc" (UID: "1e6d2753-32f3-48a3-9908-a82350f136dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.774458 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data" (OuterVolumeSpecName: "config-data") pod "1e6d2753-32f3-48a3-9908-a82350f136dc" (UID: "1e6d2753-32f3-48a3-9908-a82350f136dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.800280 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.803610 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48tgr\" (UniqueName: \"kubernetes.io/projected/1e6d2753-32f3-48a3-9908-a82350f136dc-kube-api-access-48tgr\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.803638 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.803650 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.803662 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.803675 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6d2753-32f3-48a3-9908-a82350f136dc-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.803685 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6d2753-32f3-48a3-9908-a82350f136dc-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.904363 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-httpd-config\") pod \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.904628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-combined-ca-bundle\") pod \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.904655 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-ovndb-tls-certs\") pod \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.904680 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-config\") pod \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.904701 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b9xp\" (UniqueName: \"kubernetes.io/projected/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-kube-api-access-7b9xp\") pod \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\" (UID: \"d2ca2386-8f22-41a1-87ad-7b9ff91754ef\") " Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.909716 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d2ca2386-8f22-41a1-87ad-7b9ff91754ef" (UID: "d2ca2386-8f22-41a1-87ad-7b9ff91754ef"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.909931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-kube-api-access-7b9xp" (OuterVolumeSpecName: "kube-api-access-7b9xp") pod "d2ca2386-8f22-41a1-87ad-7b9ff91754ef" (UID: "d2ca2386-8f22-41a1-87ad-7b9ff91754ef"). InnerVolumeSpecName "kube-api-access-7b9xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.916823 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"67301629-da8b-43e3-9c9e-fe99444a6ef1","Type":"ContainerStarted","Data":"60a03b5b1bef90e2e84e1e6e67e22bd06fcb8e70fa736c01094384cdd9ad4430"} Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.924337 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerStarted","Data":"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f"} Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.930441 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fcf66f4c6-vkspn" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.931608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcf66f4c6-vkspn" event={"ID":"d2ca2386-8f22-41a1-87ad-7b9ff91754ef","Type":"ContainerDied","Data":"1a66a225282f2210b9456c2ad2f79621cb750f8b27208395511bef1f3ee805ee"} Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.932093 4858 scope.go:117] "RemoveContainer" containerID="a128da7152c208000c536c7699b24204df2cda117d1aeef05c3105b57de627ee" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.938033 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.174135644 podStartE2EDuration="16.938015129s" podCreationTimestamp="2026-02-18 00:54:10 +0000 UTC" firstStartedPulling="2026-02-18 00:54:11.530272386 +0000 UTC m=+1204.836109118" lastFinishedPulling="2026-02-18 00:54:26.294151871 +0000 UTC m=+1219.599988603" observedRunningTime="2026-02-18 00:54:26.93020494 +0000 UTC m=+1220.236041672" watchObservedRunningTime="2026-02-18 00:54:26.938015129 +0000 UTC m=+1220.243851861" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.946358 4858 generic.go:334] "Generic (PLEG): container finished" podID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerID="377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07" exitCode=137 Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.946619 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6d2753-32f3-48a3-9908-a82350f136dc","Type":"ContainerDied","Data":"377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07"} Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.946731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6d2753-32f3-48a3-9908-a82350f136dc","Type":"ContainerDied","Data":"4a8d715c74e37b4a39486bc4a6034f5fcb0acd1075be4b0db9e2f97419e3ce94"} Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.946876 
4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.965546 4858 scope.go:117] "RemoveContainer" containerID="9edd90bbbbb35663d5f161377118de61616fcf832527ac5a73b726c83781b6ce" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.993927 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2ca2386-8f22-41a1-87ad-7b9ff91754ef" (UID: "d2ca2386-8f22-41a1-87ad-7b9ff91754ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:26 crc kubenswrapper[4858]: I0218 00:54:26.996548 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.008952 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.008978 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b9xp\" (UniqueName: \"kubernetes.io/projected/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-kube-api-access-7b9xp\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.008988 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.020079 4858 scope.go:117] "RemoveContainer" containerID="377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.022461 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.030060 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-config" (OuterVolumeSpecName: "config") pod "d2ca2386-8f22-41a1-87ad-7b9ff91754ef" (UID: "d2ca2386-8f22-41a1-87ad-7b9ff91754ef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.031298 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:54:27 crc kubenswrapper[4858]: E0218 00:54:27.031724 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerName="cinder-api" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.031744 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerName="cinder-api" Feb 18 00:54:27 crc kubenswrapper[4858]: E0218 00:54:27.031762 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerName="neutron-httpd" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.031768 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerName="neutron-httpd" Feb 18 00:54:27 crc kubenswrapper[4858]: E0218 00:54:27.031782 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerName="cinder-api-log" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.031788 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerName="cinder-api-log" Feb 18 00:54:27 crc kubenswrapper[4858]: E0218 00:54:27.031803 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerName="neutron-api" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.031809 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerName="neutron-api" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.031999 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerName="cinder-api" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.032016 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerName="neutron-api" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.032025 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" containerName="neutron-httpd" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.032045 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6d2753-32f3-48a3-9908-a82350f136dc" containerName="cinder-api-log" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.033064 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.035728 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.035898 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.036002 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.040873 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.059215 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d2ca2386-8f22-41a1-87ad-7b9ff91754ef" (UID: "d2ca2386-8f22-41a1-87ad-7b9ff91754ef"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.068402 4858 scope.go:117] "RemoveContainer" containerID="3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.090691 4858 scope.go:117] "RemoveContainer" containerID="377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07" Feb 18 00:54:27 crc kubenswrapper[4858]: E0218 00:54:27.091113 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07\": container with ID starting with 377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07 not found: ID does not exist" containerID="377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.091160 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07"} err="failed to get container status \"377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07\": rpc error: code = NotFound desc = could not find container \"377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07\": container with ID starting with 377325ddb4b3feb7046ed085c20ae24ce7067df4c3cdb079783ee2acbaa98d07 not found: ID does not exist" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.091198 4858 scope.go:117] "RemoveContainer" containerID="3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce" Feb 18 00:54:27 crc kubenswrapper[4858]: E0218 00:54:27.091620 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce\": container with ID starting with 3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce not found: ID does not exist" containerID="3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.091668 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce"} err="failed to get container status \"3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce\": rpc error: 
code = NotFound desc = could not find container \"3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce\": container with ID starting with 3f265aca5779bea6b29bb21912cf73581ab48cd0f41876743bdef54de473abce not found: ID does not exist" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.111285 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.111425 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxhgb\" (UniqueName: \"kubernetes.io/projected/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-kube-api-access-gxhgb\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.111718 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-logs\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.111757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.111834 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.111876 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-config-data\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.111929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.112001 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.112053 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-scripts\") pod \"cinder-api-0\" (UID: 
\"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.112158 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.112169 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2ca2386-8f22-41a1-87ad-7b9ff91754ef-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.213647 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-config-data\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.213719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.213778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.213820 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-scripts\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.213862 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.213970 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxhgb\" (UniqueName: \"kubernetes.io/projected/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-kube-api-access-gxhgb\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.214046 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-logs\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.214071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.214116 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.215079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.215403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-logs\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.219083 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: E0218 00:54:27.220086 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e6d2753_32f3_48a3_9908_a82350f136dc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e6d2753_32f3_48a3_9908_a82350f136dc.slice/crio-4a8d715c74e37b4a39486bc4a6034f5fcb0acd1075be4b0db9e2f97419e3ce94\": RecentStats: unable to find data in memory cache]" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.220651 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-config-data-custom\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.221275 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-scripts\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.222573 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.224236 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-config-data\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.224405 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " 
pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.238575 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxhgb\" (UniqueName: \"kubernetes.io/projected/2c83714c-1da1-4e6f-81a6-310d3bc6ec44-kube-api-access-gxhgb\") pod \"cinder-api-0\" (UID: \"2c83714c-1da1-4e6f-81a6-310d3bc6ec44\") " pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.365883 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.372576 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fcf66f4c6-vkspn"] Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.381357 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5fcf66f4c6-vkspn"] Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.483661 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e6d2753-32f3-48a3-9908-a82350f136dc" path="/var/lib/kubelet/pods/1e6d2753-32f3-48a3-9908-a82350f136dc/volumes" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.486442 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ca2386-8f22-41a1-87ad-7b9ff91754ef" path="/var/lib/kubelet/pods/d2ca2386-8f22-41a1-87ad-7b9ff91754ef/volumes" Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.873449 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.886515 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.889080 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-log" containerID="cri-o://1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972" gracePeriod=30 Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.889593 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-httpd" containerID="cri-o://fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9" gracePeriod=30 Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.970068 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerStarted","Data":"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317"} Feb 18 00:54:27 crc kubenswrapper[4858]: I0218 00:54:27.973345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c83714c-1da1-4e6f-81a6-310d3bc6ec44","Type":"ContainerStarted","Data":"f57e7e4ec9fdbd83b3141f0122f3fe9de96878cb2a2ff4e14fa691045eb86d72"} Feb 18 00:54:28 crc kubenswrapper[4858]: I0218 00:54:28.405509 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:28 crc kubenswrapper[4858]: I0218 00:54:28.409965 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77fb7b987-d9jrg" Feb 18 00:54:28 crc kubenswrapper[4858]: I0218 00:54:28.849686 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:54:28 
crc kubenswrapper[4858]: I0218 00:54:28.851267 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="877dd3dc-e90c-4751-9650-17e13a905e75" containerName="glance-httpd" containerID="cri-o://6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f" gracePeriod=30 Feb 18 00:54:28 crc kubenswrapper[4858]: I0218 00:54:28.851598 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="877dd3dc-e90c-4751-9650-17e13a905e75" containerName="glance-log" containerID="cri-o://e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8" gracePeriod=30 Feb 18 00:54:28 crc kubenswrapper[4858]: I0218 00:54:28.983344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c83714c-1da1-4e6f-81a6-310d3bc6ec44","Type":"ContainerStarted","Data":"8cee6d2425bd01439d3357c0cf52d3895ba36698bf44772daf2efb76bfe25bf6"} Feb 18 00:54:28 crc kubenswrapper[4858]: I0218 00:54:28.986730 4858 generic.go:334] "Generic (PLEG): container finished" podID="877dd3dc-e90c-4751-9650-17e13a905e75" containerID="e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8" exitCode=143 Feb 18 00:54:28 crc kubenswrapper[4858]: I0218 00:54:28.986785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"877dd3dc-e90c-4751-9650-17e13a905e75","Type":"ContainerDied","Data":"e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8"} Feb 18 00:54:28 crc kubenswrapper[4858]: I0218 00:54:28.996241 4858 generic.go:334] "Generic (PLEG): container finished" podID="263681ae-36ff-4a39-8e4c-1971633851ee" containerID="1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972" exitCode=143 Feb 18 00:54:28 crc kubenswrapper[4858]: I0218 00:54:28.996341 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"263681ae-36ff-4a39-8e4c-1971633851ee","Type":"ContainerDied","Data":"1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972"} Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.007118 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2c83714c-1da1-4e6f-81a6-310d3bc6ec44","Type":"ContainerStarted","Data":"230a1c844d29fb5e14f31cff31b11ea6acadc1747a8c437be67588cfe275ba01"} Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.007524 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.009930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerStarted","Data":"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c"} Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.010055 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="ceilometer-central-agent" containerID="cri-o://19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4" gracePeriod=30 Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.010322 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.010367 4858 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="proxy-httpd" containerID="cri-o://2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c" gracePeriod=30 Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.010408 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="sg-core" containerID="cri-o://47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317" gracePeriod=30 Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.010438 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="ceilometer-notification-agent" containerID="cri-o://2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f" gracePeriod=30 Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.044774 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.044756034 podStartE2EDuration="4.044756034s" podCreationTimestamp="2026-02-18 00:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:54:30.028158662 +0000 UTC m=+1223.333995394" watchObservedRunningTime="2026-02-18 00:54:30.044756034 +0000 UTC m=+1223.350592766" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.070049 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.019566141 podStartE2EDuration="13.070029857s" podCreationTimestamp="2026-02-18 00:54:17 +0000 UTC" firstStartedPulling="2026-02-18 00:54:18.706689665 +0000 UTC m=+1212.012526397" lastFinishedPulling="2026-02-18 00:54:28.757153381 +0000 UTC m=+1222.062990113" observedRunningTime="2026-02-18 00:54:30.063005196 +0000 UTC m=+1223.368841938" watchObservedRunningTime="2026-02-18 00:54:30.070029857 +0000 UTC m=+1223.375866589" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.715071 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.805637 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-combined-ca-bundle\") pod \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.805697 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-config-data\") pod \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.805750 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lvqj\" (UniqueName: \"kubernetes.io/projected/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-kube-api-access-2lvqj\") pod \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.805804 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-log-httpd\") pod \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.805827 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-run-httpd\") pod \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.806035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-scripts\") pod \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.806060 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-sg-core-conf-yaml\") pod \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\" (UID: \"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530\") " Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.806540 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" (UID: "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.806641 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" (UID: "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.813615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-kube-api-access-2lvqj" (OuterVolumeSpecName: "kube-api-access-2lvqj") pod "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" (UID: "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530"). InnerVolumeSpecName "kube-api-access-2lvqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.817572 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-scripts" (OuterVolumeSpecName: "scripts") pod "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" (UID: "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.850237 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" (UID: "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.883663 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" (UID: "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.908777 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.908822 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lvqj\" (UniqueName: \"kubernetes.io/projected/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-kube-api-access-2lvqj\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.908837 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.908848 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.908859 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.908871 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:30 crc kubenswrapper[4858]: I0218 00:54:30.918889 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-config-data" (OuterVolumeSpecName: "config-data") pod "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" (UID: "d3986e5f-5d3b-4d6f-ac9b-80d135d4d530"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.011136 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026429 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerID="2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c" exitCode=0 Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026463 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerID="47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317" exitCode=2 Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026473 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerID="2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f" exitCode=0 Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026486 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerID="19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4" exitCode=0 Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026524 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026548 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerDied","Data":"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c"} Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026616 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerDied","Data":"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317"} Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026627 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerDied","Data":"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f"} Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026641 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerDied","Data":"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4"} Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026650 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3986e5f-5d3b-4d6f-ac9b-80d135d4d530","Type":"ContainerDied","Data":"afe294d5725588db3596fae26f3e3f3bbe0e42af4ad5b264cf3b24bf996fe8bb"} Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.026672 4858 scope.go:117] "RemoveContainer" containerID="2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.068805 4858 scope.go:117] "RemoveContainer" 
containerID="47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.087553 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.100720 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": read tcp 10.217.0.2:49770->10.217.0.168:9292: read: connection reset by peer" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.101026 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": read tcp 10.217.0.2:49764->10.217.0.168:9292: read: connection reset by peer" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.116532 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.133533 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:31 crc kubenswrapper[4858]: E0218 00:54:31.133884 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="ceilometer-notification-agent" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.133900 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="ceilometer-notification-agent" Feb 18 00:54:31 crc kubenswrapper[4858]: E0218 00:54:31.133917 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="proxy-httpd" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.133923 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="proxy-httpd" Feb 18 00:54:31 crc kubenswrapper[4858]: E0218 00:54:31.133936 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="sg-core" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.133941 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="sg-core" Feb 18 00:54:31 crc kubenswrapper[4858]: E0218 00:54:31.133970 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="ceilometer-central-agent" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.133978 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="ceilometer-central-agent" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.134192 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="ceilometer-central-agent" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.134206 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="sg-core" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.134213 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="proxy-httpd" Feb 18 00:54:31 crc 
kubenswrapper[4858]: I0218 00:54:31.134232 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" containerName="ceilometer-notification-agent" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.134746 4858 scope.go:117] "RemoveContainer" containerID="2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.135984 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.138880 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.139060 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.144916 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.191661 4858 scope.go:117] "RemoveContainer" containerID="19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.233774 4858 scope.go:117] "RemoveContainer" containerID="2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c" Feb 18 00:54:31 crc kubenswrapper[4858]: E0218 00:54:31.234220 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c\": container with ID starting with 2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c not found: ID does not exist" containerID="2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.234276 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c"} err="failed to get container status \"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c\": rpc error: code = NotFound desc = could not find container \"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c\": container with ID starting with 2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.234332 4858 scope.go:117] "RemoveContainer" containerID="47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317" Feb 18 00:54:31 crc kubenswrapper[4858]: E0218 00:54:31.234944 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317\": container with ID starting with 47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317 not found: ID does not exist" containerID="47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.235010 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317"} err="failed to get container status \"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317\": rpc error: code = NotFound desc = could not find container \"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317\": 
container with ID starting with 47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317 not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.235037 4858 scope.go:117] "RemoveContainer" containerID="2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f" Feb 18 00:54:31 crc kubenswrapper[4858]: E0218 00:54:31.235419 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f\": container with ID starting with 2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f not found: ID does not exist" containerID="2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.235450 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f"} err="failed to get container status \"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f\": rpc error: code = NotFound desc = could not find container \"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f\": container with ID starting with 2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.235471 4858 scope.go:117] "RemoveContainer" containerID="19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4" Feb 18 00:54:31 crc kubenswrapper[4858]: E0218 00:54:31.235768 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4\": container with ID starting with 19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4 not found: ID does not exist" containerID="19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.235793 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4"} err="failed to get container status \"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4\": rpc error: code = NotFound desc = could not find container \"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4\": container with ID starting with 19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4 not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.235825 4858 scope.go:117] "RemoveContainer" containerID="2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.236042 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c"} err="failed to get container status \"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c\": rpc error: code = NotFound desc = could not find container \"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c\": container with ID starting with 2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.236063 4858 scope.go:117] "RemoveContainer" containerID="47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317" 
Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.236860 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317"} err="failed to get container status \"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317\": rpc error: code = NotFound desc = could not find container \"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317\": container with ID starting with 47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317 not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.236882 4858 scope.go:117] "RemoveContainer" containerID="2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.237132 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f"} err="failed to get container status \"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f\": rpc error: code = NotFound desc = could not find container \"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f\": container with ID starting with 2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.237159 4858 scope.go:117] "RemoveContainer" containerID="19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.237533 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4"} err="failed to get container status \"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4\": rpc error: code = NotFound desc = could not find container \"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4\": container with ID starting with 19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4 not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.237554 4858 scope.go:117] "RemoveContainer" containerID="2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.237775 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c"} err="failed to get container status \"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c\": rpc error: code = NotFound desc = could not find container \"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c\": container with ID starting with 2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.237800 4858 scope.go:117] "RemoveContainer" containerID="47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.238001 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317"} err="failed to get container status \"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317\": rpc error: code = NotFound desc = could not find container \"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317\": 
container with ID starting with 47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317 not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.238022 4858 scope.go:117] "RemoveContainer" containerID="2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.238260 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f"} err="failed to get container status \"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f\": rpc error: code = NotFound desc = could not find container \"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f\": container with ID starting with 2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.238299 4858 scope.go:117] "RemoveContainer" containerID="19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.238569 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4"} err="failed to get container status \"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4\": rpc error: code = NotFound desc = could not find container \"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4\": container with ID starting with 19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4 not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.238588 4858 scope.go:117] "RemoveContainer" containerID="2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.240036 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c"} err="failed to get container status \"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c\": rpc error: code = NotFound desc = could not find container \"2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c\": container with ID starting with 2951ea1dc8a77d95c32ec4df9eb6dd92f94b03fdbb265fb48753258790d3af3c not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.240057 4858 scope.go:117] "RemoveContainer" containerID="47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.240457 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317"} err="failed to get container status \"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317\": rpc error: code = NotFound desc = could not find container \"47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317\": container with ID starting with 47811df9cf73409327b322b8795721b9135825d0b7696ba3a7f5c86b8db27317 not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.240478 4858 scope.go:117] "RemoveContainer" containerID="2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.240696 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f"} err="failed to get container status \"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f\": rpc error: code = NotFound desc = could not find container \"2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f\": container with ID starting with 2cdb83572484d91fcf906040bdd375901d3621fa3238047295057d3437334a7f not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.240723 4858 scope.go:117] "RemoveContainer" containerID="19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.240962 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4"} err="failed to get container status \"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4\": rpc error: code = NotFound desc = could not find container \"19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4\": container with ID starting with 19bd3df23fd16487bcfbf9b189a7de4e51a540f47505d792d8cfd359d56b3fc4 not found: ID does not exist" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.315939 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klvjj\" (UniqueName: \"kubernetes.io/projected/30527726-4c4d-455a-a73a-7e4e909e6f13-kube-api-access-klvjj\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.316118 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.316155 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-run-httpd\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.316191 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-scripts\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.316365 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-config-data\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.316439 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.316556 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-log-httpd\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.417984 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-scripts\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.418056 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-config-data\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.418084 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.418114 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-log-httpd\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.418149 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klvjj\" (UniqueName: \"kubernetes.io/projected/30527726-4c4d-455a-a73a-7e4e909e6f13-kube-api-access-klvjj\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.418232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.418256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-run-httpd\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.419476 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-run-httpd\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.420734 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-log-httpd\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.424203 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-scripts\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.424849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.424851 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-config-data\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.426340 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.441385 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3986e5f-5d3b-4d6f-ac9b-80d135d4d530" path="/var/lib/kubelet/pods/d3986e5f-5d3b-4d6f-ac9b-80d135d4d530/volumes" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.446552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klvjj\" (UniqueName: \"kubernetes.io/projected/30527726-4c4d-455a-a73a-7e4e909e6f13-kube-api-access-klvjj\") pod \"ceilometer-0\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.492892 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.675465 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.825938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-scripts\") pod \"263681ae-36ff-4a39-8e4c-1971633851ee\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.826012 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-logs\") pod \"263681ae-36ff-4a39-8e4c-1971633851ee\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.826052 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-public-tls-certs\") pod \"263681ae-36ff-4a39-8e4c-1971633851ee\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.826108 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-config-data\") pod \"263681ae-36ff-4a39-8e4c-1971633851ee\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.826178 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw99h\" (UniqueName: \"kubernetes.io/projected/263681ae-36ff-4a39-8e4c-1971633851ee-kube-api-access-bw99h\") pod \"263681ae-36ff-4a39-8e4c-1971633851ee\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.826240 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-httpd-run\") pod \"263681ae-36ff-4a39-8e4c-1971633851ee\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.826397 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"263681ae-36ff-4a39-8e4c-1971633851ee\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.826474 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-combined-ca-bundle\") pod \"263681ae-36ff-4a39-8e4c-1971633851ee\" (UID: \"263681ae-36ff-4a39-8e4c-1971633851ee\") " Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.826582 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-logs" (OuterVolumeSpecName: "logs") pod "263681ae-36ff-4a39-8e4c-1971633851ee" (UID: "263681ae-36ff-4a39-8e4c-1971633851ee"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.826612 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "263681ae-36ff-4a39-8e4c-1971633851ee" (UID: "263681ae-36ff-4a39-8e4c-1971633851ee"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.827126 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.827153 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/263681ae-36ff-4a39-8e4c-1971633851ee-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.831779 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/263681ae-36ff-4a39-8e4c-1971633851ee-kube-api-access-bw99h" (OuterVolumeSpecName: "kube-api-access-bw99h") pod "263681ae-36ff-4a39-8e4c-1971633851ee" (UID: "263681ae-36ff-4a39-8e4c-1971633851ee"). InnerVolumeSpecName "kube-api-access-bw99h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.835309 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-scripts" (OuterVolumeSpecName: "scripts") pod "263681ae-36ff-4a39-8e4c-1971633851ee" (UID: "263681ae-36ff-4a39-8e4c-1971633851ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.845795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770" (OuterVolumeSpecName: "glance") pod "263681ae-36ff-4a39-8e4c-1971633851ee" (UID: "263681ae-36ff-4a39-8e4c-1971633851ee"). InnerVolumeSpecName "pvc-ced7117e-c471-49ff-8f11-c2333cc7f770". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.854898 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "263681ae-36ff-4a39-8e4c-1971633851ee" (UID: "263681ae-36ff-4a39-8e4c-1971633851ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.881144 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-config-data" (OuterVolumeSpecName: "config-data") pod "263681ae-36ff-4a39-8e4c-1971633851ee" (UID: "263681ae-36ff-4a39-8e4c-1971633851ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.892231 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "263681ae-36ff-4a39-8e4c-1971633851ee" (UID: "263681ae-36ff-4a39-8e4c-1971633851ee"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.928581 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.928614 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.928624 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.928635 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw99h\" (UniqueName: \"kubernetes.io/projected/263681ae-36ff-4a39-8e4c-1971633851ee-kube-api-access-bw99h\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.928670 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") on node \"crc\" " Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.928686 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263681ae-36ff-4a39-8e4c-1971633851ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.952320 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.957138 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 00:54:31 crc kubenswrapper[4858]: I0218 00:54:31.957443 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ced7117e-c471-49ff-8f11-c2333cc7f770" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770") on node "crc" Feb 18 00:54:31 crc kubenswrapper[4858]: W0218 00:54:31.959248 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30527726_4c4d_455a_a73a_7e4e909e6f13.slice/crio-49612f417922945a5911aa18d8a298e4bb92b9d72b326147f8579f195413c86b WatchSource:0}: Error finding container 49612f417922945a5911aa18d8a298e4bb92b9d72b326147f8579f195413c86b: Status 404 returned error can't find the container with id 49612f417922945a5911aa18d8a298e4bb92b9d72b326147f8579f195413c86b Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.030085 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.039687 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerStarted","Data":"49612f417922945a5911aa18d8a298e4bb92b9d72b326147f8579f195413c86b"} Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.041683 4858 generic.go:334] "Generic (PLEG): container finished" podID="263681ae-36ff-4a39-8e4c-1971633851ee" containerID="fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9" exitCode=0 Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.041746 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.041722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"263681ae-36ff-4a39-8e4c-1971633851ee","Type":"ContainerDied","Data":"fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9"} Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.042138 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"263681ae-36ff-4a39-8e4c-1971633851ee","Type":"ContainerDied","Data":"5c42157a0489161c34ec6ce3d067fcbe87e70d31cd85baba564e165b2ca55a2d"} Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.042159 4858 scope.go:117] "RemoveContainer" containerID="fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.081290 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.090014 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.101917 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:54:32 crc kubenswrapper[4858]: E0218 00:54:32.105062 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-httpd" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.105084 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-httpd" Feb 18 00:54:32 crc kubenswrapper[4858]: E0218 00:54:32.105109 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-log" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.105116 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-log" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.105298 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-httpd" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.105326 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" containerName="glance-log" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.106739 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.109336 4858 scope.go:117] "RemoveContainer" containerID="1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.109589 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.109741 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.120284 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.154201 4858 scope.go:117] "RemoveContainer" containerID="fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9" Feb 18 00:54:32 crc kubenswrapper[4858]: E0218 00:54:32.154595 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9\": container with ID starting with fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9 not found: ID does not exist" containerID="fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.154623 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9"} err="failed to get container status \"fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9\": rpc error: code = NotFound desc = could not find container \"fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9\": container with ID starting with fd94bfaca7bd87c61434531ee1d443692bedf62d48845ab6d8bed005492c97a9 not found: ID does not exist" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.154641 4858 scope.go:117] "RemoveContainer" containerID="1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972" Feb 18 00:54:32 crc kubenswrapper[4858]: E0218 00:54:32.154887 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972\": container with ID starting with 1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972 not found: ID does not exist" containerID="1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.154919 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972"} err="failed to get container status \"1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972\": rpc error: code = NotFound desc = could not find container \"1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972\": container with ID starting with 1f6b2284735f799f240e4d480cbcf152034af2f5fa8b634f38348028a5fc3972 not found: ID does not exist" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.234520 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.234570 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.234614 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58abf118-bedd-4b18-a089-bf4ac9d06f44-logs\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.234635 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6694g\" (UniqueName: \"kubernetes.io/projected/58abf118-bedd-4b18-a089-bf4ac9d06f44-kube-api-access-6694g\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.234671 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.234696 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58abf118-bedd-4b18-a089-bf4ac9d06f44-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.234771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-scripts\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.234820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-config-data\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.336754 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-config-data\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.336801 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.336826 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.336866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58abf118-bedd-4b18-a089-bf4ac9d06f44-logs\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.336887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6694g\" (UniqueName: \"kubernetes.io/projected/58abf118-bedd-4b18-a089-bf4ac9d06f44-kube-api-access-6694g\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.336918 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.336941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58abf118-bedd-4b18-a089-bf4ac9d06f44-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.337019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-scripts\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.337969 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58abf118-bedd-4b18-a089-bf4ac9d06f44-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.339207 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58abf118-bedd-4b18-a089-bf4ac9d06f44-logs\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.348369 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.348853 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.348970 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70d288833f8e05bd5ab355a71e03a1d850821b8fc6c525467c163add739f4167/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.348895 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-config-data\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.350583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.358020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58abf118-bedd-4b18-a089-bf4ac9d06f44-scripts\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.361699 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6694g\" (UniqueName: \"kubernetes.io/projected/58abf118-bedd-4b18-a089-bf4ac9d06f44-kube-api-access-6694g\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.407935 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ced7117e-c471-49ff-8f11-c2333cc7f770\") pod \"glance-default-external-api-0\" (UID: \"58abf118-bedd-4b18-a089-bf4ac9d06f44\") " pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.450627 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.568211 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.655276 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-scripts\") pod \"877dd3dc-e90c-4751-9650-17e13a905e75\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.656198 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-internal-tls-certs\") pod \"877dd3dc-e90c-4751-9650-17e13a905e75\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.656325 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"877dd3dc-e90c-4751-9650-17e13a905e75\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.656396 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2kd9\" (UniqueName: \"kubernetes.io/projected/877dd3dc-e90c-4751-9650-17e13a905e75-kube-api-access-w2kd9\") pod \"877dd3dc-e90c-4751-9650-17e13a905e75\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.656422 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-combined-ca-bundle\") pod \"877dd3dc-e90c-4751-9650-17e13a905e75\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.656525 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-httpd-run\") pod \"877dd3dc-e90c-4751-9650-17e13a905e75\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.656600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-logs\") pod \"877dd3dc-e90c-4751-9650-17e13a905e75\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.656616 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-config-data\") pod \"877dd3dc-e90c-4751-9650-17e13a905e75\" (UID: \"877dd3dc-e90c-4751-9650-17e13a905e75\") " Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.660336 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "877dd3dc-e90c-4751-9650-17e13a905e75" (UID: "877dd3dc-e90c-4751-9650-17e13a905e75"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.662290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-logs" (OuterVolumeSpecName: "logs") pod "877dd3dc-e90c-4751-9650-17e13a905e75" (UID: "877dd3dc-e90c-4751-9650-17e13a905e75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.678257 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-scripts" (OuterVolumeSpecName: "scripts") pod "877dd3dc-e90c-4751-9650-17e13a905e75" (UID: "877dd3dc-e90c-4751-9650-17e13a905e75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.687074 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877dd3dc-e90c-4751-9650-17e13a905e75-kube-api-access-w2kd9" (OuterVolumeSpecName: "kube-api-access-w2kd9") pod "877dd3dc-e90c-4751-9650-17e13a905e75" (UID: "877dd3dc-e90c-4751-9650-17e13a905e75"). InnerVolumeSpecName "kube-api-access-w2kd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.742733 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-config-data" (OuterVolumeSpecName: "config-data") pod "877dd3dc-e90c-4751-9650-17e13a905e75" (UID: "877dd3dc-e90c-4751-9650-17e13a905e75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.748966 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "877dd3dc-e90c-4751-9650-17e13a905e75" (UID: "877dd3dc-e90c-4751-9650-17e13a905e75"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.760085 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.760116 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2kd9\" (UniqueName: \"kubernetes.io/projected/877dd3dc-e90c-4751-9650-17e13a905e75-kube-api-access-w2kd9\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.760126 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.760134 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.760141 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/877dd3dc-e90c-4751-9650-17e13a905e75-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.760149 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.857440 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "877dd3dc-e90c-4751-9650-17e13a905e75" (UID: "877dd3dc-e90c-4751-9650-17e13a905e75"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.866867 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/877dd3dc-e90c-4751-9650-17e13a905e75-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.876696 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2" (OuterVolumeSpecName: "glance") pod "877dd3dc-e90c-4751-9650-17e13a905e75" (UID: "877dd3dc-e90c-4751-9650-17e13a905e75"). InnerVolumeSpecName "pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.969014 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") on node \"crc\" " Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.993228 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 00:54:32 crc kubenswrapper[4858]: I0218 00:54:32.993386 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2") on node "crc" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.053918 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerStarted","Data":"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4"} Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.055991 4858 generic.go:334] "Generic (PLEG): container finished" podID="877dd3dc-e90c-4751-9650-17e13a905e75" containerID="6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f" exitCode=0 Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.056036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"877dd3dc-e90c-4751-9650-17e13a905e75","Type":"ContainerDied","Data":"6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f"} Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.056054 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"877dd3dc-e90c-4751-9650-17e13a905e75","Type":"ContainerDied","Data":"b616633e11a4b9ec85b6721f90e7d5fe5f70c7cb7fa161ca85c52c2ec186e345"} Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.056070 4858 scope.go:117] "RemoveContainer" containerID="6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.056192 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.079942 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.099662 4858 scope.go:117] "RemoveContainer" containerID="e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.102514 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.123912 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.125219 4858 scope.go:117] "RemoveContainer" containerID="6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f" Feb 18 00:54:33 crc kubenswrapper[4858]: E0218 00:54:33.125696 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f\": container with ID starting with 6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f not found: ID does not exist" containerID="6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.125726 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f"} err="failed 
to get container status \"6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f\": rpc error: code = NotFound desc = could not find container \"6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f\": container with ID starting with 6cd94bd78ba937632315375bbdd6808fd6c156139a11596f39886415b9f38a6f not found: ID does not exist" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.125748 4858 scope.go:117] "RemoveContainer" containerID="e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8" Feb 18 00:54:33 crc kubenswrapper[4858]: E0218 00:54:33.129819 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8\": container with ID starting with e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8 not found: ID does not exist" containerID="e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.129868 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8"} err="failed to get container status \"e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8\": rpc error: code = NotFound desc = could not find container \"e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8\": container with ID starting with e50737d2634a0884d1f67434f82949f5adc239249cb256377b6b080f5b7742e8 not found: ID does not exist" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.146029 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:54:33 crc kubenswrapper[4858]: E0218 00:54:33.146520 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877dd3dc-e90c-4751-9650-17e13a905e75" containerName="glance-httpd" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.146538 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="877dd3dc-e90c-4751-9650-17e13a905e75" containerName="glance-httpd" Feb 18 00:54:33 crc kubenswrapper[4858]: E0218 00:54:33.146548 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877dd3dc-e90c-4751-9650-17e13a905e75" containerName="glance-log" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.146554 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="877dd3dc-e90c-4751-9650-17e13a905e75" containerName="glance-log" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.146722 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="877dd3dc-e90c-4751-9650-17e13a905e75" containerName="glance-log" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.146740 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="877dd3dc-e90c-4751-9650-17e13a905e75" containerName="glance-httpd" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.147822 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.153419 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.153675 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.155373 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.163007 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.285394 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.285443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.285478 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlwn\" (UniqueName: \"kubernetes.io/projected/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-kube-api-access-7hlwn\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.285680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.285735 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.285789 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.285824 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-httpd-run\") pod \"glance-default-internal-api-0\" 
(UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.285940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.387518 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.387587 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.387792 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.387827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.387853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hlwn\" (UniqueName: \"kubernetes.io/projected/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-kube-api-access-7hlwn\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.387887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.387916 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.387944 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.388010 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.389306 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.392781 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.393299 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.394460 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.395747 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.395774 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5803e2a900e6e36c291a83a7f7817a6f6801a9c863eb8ea67b62b877ff35bd26/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.405003 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hlwn\" (UniqueName: \"kubernetes.io/projected/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-kube-api-access-7hlwn\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.406127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.440515 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="263681ae-36ff-4a39-8e4c-1971633851ee" path="/var/lib/kubelet/pods/263681ae-36ff-4a39-8e4c-1971633851ee/volumes" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.441519 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877dd3dc-e90c-4751-9650-17e13a905e75" path="/var/lib/kubelet/pods/877dd3dc-e90c-4751-9650-17e13a905e75/volumes" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.495201 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfac2842-0347-4e51-bc19-0ab31a3d8ac2\") pod \"glance-default-internal-api-0\" (UID: \"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d\") " pod="openstack/glance-default-internal-api-0" Feb 18 00:54:33 crc kubenswrapper[4858]: I0218 00:54:33.767572 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:34 crc kubenswrapper[4858]: I0218 00:54:34.078832 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"58abf118-bedd-4b18-a089-bf4ac9d06f44","Type":"ContainerStarted","Data":"4b0ac168b14e885129ad6c0797530bf2de83e666a5bdf7045e1f11dc7e9a5c9d"} Feb 18 00:54:34 crc kubenswrapper[4858]: I0218 00:54:34.079185 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"58abf118-bedd-4b18-a089-bf4ac9d06f44","Type":"ContainerStarted","Data":"cca6d909c2745ac833de52a12ac801148919fca584f2991299f311b5fac6e600"} Feb 18 00:54:34 crc kubenswrapper[4858]: I0218 00:54:34.098274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerStarted","Data":"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b"} Feb 18 00:54:34 crc kubenswrapper[4858]: I0218 00:54:34.335903 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 00:54:35 crc kubenswrapper[4858]: I0218 00:54:35.125370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d","Type":"ContainerStarted","Data":"150897b305690259420a1a174210ad7ac2aac00af0155e25a2edf9c80cc59838"} Feb 18 00:54:35 crc kubenswrapper[4858]: I0218 00:54:35.125848 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d","Type":"ContainerStarted","Data":"a37058e765b2fcfae9866151d7e091fe66666ed56a201ae45f605f7522e2d050"} Feb 18 00:54:35 crc kubenswrapper[4858]: I0218 00:54:35.140866 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerStarted","Data":"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92"} Feb 18 00:54:35 crc kubenswrapper[4858]: I0218 00:54:35.144053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"58abf118-bedd-4b18-a089-bf4ac9d06f44","Type":"ContainerStarted","Data":"1f841087036c8263a93a932d3d1227921571b8ec82064696c10176b4b6dcce2f"} Feb 18 00:54:35 crc kubenswrapper[4858]: I0218 00:54:35.177174 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.177154294 podStartE2EDuration="3.177154294s" podCreationTimestamp="2026-02-18 00:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:54:35.166588258 +0000 UTC m=+1228.472424990" watchObservedRunningTime="2026-02-18 00:54:35.177154294 +0000 UTC m=+1228.482991026" Feb 18 00:54:36 crc kubenswrapper[4858]: I0218 00:54:36.154293 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d","Type":"ContainerStarted","Data":"c953267d8a7aa480ab3d73cec3c56034f5eca5f69121921552f9acf5af7d896c"} Feb 18 00:54:36 crc kubenswrapper[4858]: I0218 00:54:36.174390 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.174371871 podStartE2EDuration="3.174371871s" 
podCreationTimestamp="2026-02-18 00:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:54:36.171199404 +0000 UTC m=+1229.477036136" watchObservedRunningTime="2026-02-18 00:54:36.174371871 +0000 UTC m=+1229.480208593" Feb 18 00:54:36 crc kubenswrapper[4858]: I0218 00:54:36.471924 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:37 crc kubenswrapper[4858]: I0218 00:54:37.164879 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerStarted","Data":"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154"} Feb 18 00:54:37 crc kubenswrapper[4858]: I0218 00:54:37.512213 4858 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podfceda99c-5b24-470d-a686-fea2bb92d258"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podfceda99c-5b24-470d-a686-fea2bb92d258] : Timed out while waiting for systemd to remove kubepods-besteffort-podfceda99c_5b24_470d_a686_fea2bb92d258.slice" Feb 18 00:54:38 crc kubenswrapper[4858]: I0218 00:54:38.172079 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:54:38 crc kubenswrapper[4858]: I0218 00:54:38.172074 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="ceilometer-central-agent" containerID="cri-o://e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4" gracePeriod=30 Feb 18 00:54:38 crc kubenswrapper[4858]: I0218 00:54:38.172124 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="proxy-httpd" containerID="cri-o://c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154" gracePeriod=30 Feb 18 00:54:38 crc kubenswrapper[4858]: I0218 00:54:38.172139 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="sg-core" containerID="cri-o://40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92" gracePeriod=30 Feb 18 00:54:38 crc kubenswrapper[4858]: I0218 00:54:38.172151 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="ceilometer-notification-agent" containerID="cri-o://789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b" gracePeriod=30 Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.035819 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.108993 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-combined-ca-bundle\") pod \"30527726-4c4d-455a-a73a-7e4e909e6f13\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.109799 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-config-data\") pod \"30527726-4c4d-455a-a73a-7e4e909e6f13\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.109948 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-scripts\") pod \"30527726-4c4d-455a-a73a-7e4e909e6f13\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.109983 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klvjj\" (UniqueName: \"kubernetes.io/projected/30527726-4c4d-455a-a73a-7e4e909e6f13-kube-api-access-klvjj\") pod \"30527726-4c4d-455a-a73a-7e4e909e6f13\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.110001 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-run-httpd\") pod \"30527726-4c4d-455a-a73a-7e4e909e6f13\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.110033 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-sg-core-conf-yaml\") pod \"30527726-4c4d-455a-a73a-7e4e909e6f13\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.110089 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-log-httpd\") pod \"30527726-4c4d-455a-a73a-7e4e909e6f13\" (UID: \"30527726-4c4d-455a-a73a-7e4e909e6f13\") " Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.110798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30527726-4c4d-455a-a73a-7e4e909e6f13" (UID: "30527726-4c4d-455a-a73a-7e4e909e6f13"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.111020 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30527726-4c4d-455a-a73a-7e4e909e6f13" (UID: "30527726-4c4d-455a-a73a-7e4e909e6f13"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.117801 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-scripts" (OuterVolumeSpecName: "scripts") pod "30527726-4c4d-455a-a73a-7e4e909e6f13" (UID: "30527726-4c4d-455a-a73a-7e4e909e6f13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.117872 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30527726-4c4d-455a-a73a-7e4e909e6f13-kube-api-access-klvjj" (OuterVolumeSpecName: "kube-api-access-klvjj") pod "30527726-4c4d-455a-a73a-7e4e909e6f13" (UID: "30527726-4c4d-455a-a73a-7e4e909e6f13"). InnerVolumeSpecName "kube-api-access-klvjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.160191 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30527726-4c4d-455a-a73a-7e4e909e6f13" (UID: "30527726-4c4d-455a-a73a-7e4e909e6f13"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186327 4858 generic.go:334] "Generic (PLEG): container finished" podID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerID="c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154" exitCode=0 Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186358 4858 generic.go:334] "Generic (PLEG): container finished" podID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerID="40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92" exitCode=2 Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186365 4858 generic.go:334] "Generic (PLEG): container finished" podID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerID="789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b" exitCode=0 Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186372 4858 generic.go:334] "Generic (PLEG): container finished" podID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerID="e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4" exitCode=0 Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186390 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerDied","Data":"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154"} Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186414 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerDied","Data":"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92"} Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186423 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerDied","Data":"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b"} Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerDied","Data":"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4"} Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186440 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30527726-4c4d-455a-a73a-7e4e909e6f13","Type":"ContainerDied","Data":"49612f417922945a5911aa18d8a298e4bb92b9d72b326147f8579f195413c86b"} Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186455 4858 scope.go:117] "RemoveContainer" containerID="c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.186589 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.212798 4858 scope.go:117] "RemoveContainer" containerID="40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.213368 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klvjj\" (UniqueName: \"kubernetes.io/projected/30527726-4c4d-455a-a73a-7e4e909e6f13-kube-api-access-klvjj\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.213394 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.213404 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.213411 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30527726-4c4d-455a-a73a-7e4e909e6f13-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.213421 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.214692 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30527726-4c4d-455a-a73a-7e4e909e6f13" (UID: "30527726-4c4d-455a-a73a-7e4e909e6f13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.236844 4858 scope.go:117] "RemoveContainer" containerID="789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.253437 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-config-data" (OuterVolumeSpecName: "config-data") pod "30527726-4c4d-455a-a73a-7e4e909e6f13" (UID: "30527726-4c4d-455a-a73a-7e4e909e6f13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.259885 4858 scope.go:117] "RemoveContainer" containerID="e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.285086 4858 scope.go:117] "RemoveContainer" containerID="c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154" Feb 18 00:54:39 crc kubenswrapper[4858]: E0218 00:54:39.285644 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154\": container with ID starting with c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154 not found: ID does not exist" containerID="c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.285675 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154"} err="failed to get container status \"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154\": rpc error: code = NotFound desc = could not find container \"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154\": container with ID starting with c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.285700 4858 scope.go:117] "RemoveContainer" containerID="40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92" Feb 18 00:54:39 crc kubenswrapper[4858]: E0218 00:54:39.285943 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92\": container with ID starting with 40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92 not found: ID does not exist" containerID="40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.285981 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92"} err="failed to get container status \"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92\": rpc error: code = NotFound desc = could not find container \"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92\": container with ID starting with 40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.285999 4858 scope.go:117] "RemoveContainer" containerID="789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b" Feb 18 00:54:39 crc kubenswrapper[4858]: E0218 00:54:39.286231 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b\": container with ID starting with 789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b not found: ID does not exist" containerID="789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.286259 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b"} err="failed to get container status \"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b\": rpc error: code = NotFound desc = could not find container \"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b\": container with ID starting with 789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.286278 4858 scope.go:117] "RemoveContainer" containerID="e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4" Feb 18 00:54:39 crc kubenswrapper[4858]: E0218 00:54:39.286668 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4\": container with ID starting with e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4 not found: ID does not exist" containerID="e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.286723 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4"} err="failed to get container status \"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4\": rpc error: code = NotFound desc = could not find container \"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4\": container with ID starting with e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.286761 4858 scope.go:117] "RemoveContainer" containerID="c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.287023 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154"} err="failed to get container status \"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154\": rpc error: code = NotFound desc = could not find container \"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154\": container with ID starting with c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.287046 4858 scope.go:117] "RemoveContainer" containerID="40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.287374 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92"} err="failed to get container status \"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92\": rpc error: code = NotFound desc = could not find container \"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92\": container with ID starting with 40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.287410 4858 scope.go:117] "RemoveContainer" containerID="789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.287658 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b"} err="failed to get container status \"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b\": rpc error: code = NotFound desc = could not find container \"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b\": container with ID starting with 789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.287679 4858 scope.go:117] "RemoveContainer" containerID="e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.288091 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4"} err="failed to get container status \"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4\": rpc error: code = NotFound desc = could not find container \"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4\": container with ID starting with e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.288109 4858 scope.go:117] "RemoveContainer" containerID="c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.288387 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154"} err="failed to get container status \"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154\": rpc error: code = NotFound desc = could not find container \"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154\": container with ID starting with c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.288408 4858 scope.go:117] "RemoveContainer" containerID="40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.288747 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92"} err="failed to get container status \"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92\": rpc error: code = NotFound desc = could not find container \"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92\": container with ID starting with 40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.288764 4858 scope.go:117] "RemoveContainer" containerID="789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.289004 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b"} err="failed to get container status \"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b\": rpc error: code = NotFound desc = could not find container \"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b\": container with ID starting with 789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b not found: ID does not exist" Feb 
18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.289024 4858 scope.go:117] "RemoveContainer" containerID="e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.289365 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4"} err="failed to get container status \"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4\": rpc error: code = NotFound desc = could not find container \"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4\": container with ID starting with e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.289382 4858 scope.go:117] "RemoveContainer" containerID="c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.289628 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154"} err="failed to get container status \"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154\": rpc error: code = NotFound desc = could not find container \"c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154\": container with ID starting with c28aa8e1a3a8bb97fad01a1592ad1766b174130c82005ff57d3f6a2db9ae1154 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.289647 4858 scope.go:117] "RemoveContainer" containerID="40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.290047 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92"} err="failed to get container status \"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92\": rpc error: code = NotFound desc = could not find container \"40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92\": container with ID starting with 40ac8d37cbfe1e8e1d3f847b2dfd686c924d820df642c5e8d06000ece03a9a92 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.290063 4858 scope.go:117] "RemoveContainer" containerID="789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.290283 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b"} err="failed to get container status \"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b\": rpc error: code = NotFound desc = could not find container \"789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b\": container with ID starting with 789ac1e83ec9c8aaf7a2b88dc3180be968f96d8ccd3e2a845dc0c1c97f2eb32b not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.290311 4858 scope.go:117] "RemoveContainer" containerID="e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.290625 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4"} err="failed to get container status 
\"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4\": rpc error: code = NotFound desc = could not find container \"e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4\": container with ID starting with e36a6a0161507f3df30bebe6185439837184ac3972f7bc910737cc9444c375f4 not found: ID does not exist" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.314989 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.315028 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30527726-4c4d-455a-a73a-7e4e909e6f13-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.437385 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.508119 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.522433 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.543612 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:39 crc kubenswrapper[4858]: E0218 00:54:39.544226 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="sg-core" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.544305 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="sg-core" Feb 18 00:54:39 crc kubenswrapper[4858]: E0218 00:54:39.544387 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="proxy-httpd" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.544444 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="proxy-httpd" Feb 18 00:54:39 crc kubenswrapper[4858]: E0218 00:54:39.544518 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="ceilometer-notification-agent" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.544579 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="ceilometer-notification-agent" Feb 18 00:54:39 crc kubenswrapper[4858]: E0218 00:54:39.544641 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="ceilometer-central-agent" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.544694 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="ceilometer-central-agent" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.544917 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="sg-core" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.544981 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="ceilometer-notification-agent" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.545058 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="ceilometer-central-agent" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.545114 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" containerName="proxy-httpd" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.546884 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.551384 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.551485 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.553227 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.621141 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-run-httpd\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.621213 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.621813 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.621922 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-config-data\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.621976 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cc86\" (UniqueName: \"kubernetes.io/projected/b3a688a8-79c1-419d-8ad9-01fb945592c8-kube-api-access-6cc86\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.622091 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-log-httpd\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.622154 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-scripts\") pod \"ceilometer-0\" (UID: 
\"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.724042 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-run-httpd\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.724107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.724171 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.724197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-config-data\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.724217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cc86\" (UniqueName: \"kubernetes.io/projected/b3a688a8-79c1-419d-8ad9-01fb945592c8-kube-api-access-6cc86\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.724255 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-log-httpd\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.724275 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-scripts\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.724586 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-run-httpd\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.725445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-log-httpd\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.728447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-scripts\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " 
pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.729994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.731207 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.740027 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-config-data\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.747062 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cc86\" (UniqueName: \"kubernetes.io/projected/b3a688a8-79c1-419d-8ad9-01fb945592c8-kube-api-access-6cc86\") pod \"ceilometer-0\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " pod="openstack/ceilometer-0" Feb 18 00:54:39 crc kubenswrapper[4858]: I0218 00:54:39.867825 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:40 crc kubenswrapper[4858]: I0218 00:54:40.389357 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:40 crc kubenswrapper[4858]: I0218 00:54:40.972590 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:41 crc kubenswrapper[4858]: I0218 00:54:41.206483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerStarted","Data":"67f296961dd65634f7d629b404e5ae8bbdbcf20b99218058de46d97f7a000bb0"} Feb 18 00:54:41 crc kubenswrapper[4858]: I0218 00:54:41.463934 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30527726-4c4d-455a-a73a-7e4e909e6f13" path="/var/lib/kubelet/pods/30527726-4c4d-455a-a73a-7e4e909e6f13/volumes" Feb 18 00:54:42 crc kubenswrapper[4858]: I0218 00:54:42.216659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerStarted","Data":"ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641"} Feb 18 00:54:42 crc kubenswrapper[4858]: I0218 00:54:42.452326 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 00:54:42 crc kubenswrapper[4858]: I0218 00:54:42.452396 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 00:54:42 crc kubenswrapper[4858]: I0218 00:54:42.486370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 00:54:42 crc kubenswrapper[4858]: I0218 00:54:42.493958 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 00:54:43 crc kubenswrapper[4858]: I0218 00:54:43.227878 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerStarted","Data":"f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4"} Feb 18 00:54:43 crc kubenswrapper[4858]: I0218 00:54:43.228151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerStarted","Data":"21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59"} Feb 18 00:54:43 crc kubenswrapper[4858]: I0218 00:54:43.228169 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 00:54:43 crc kubenswrapper[4858]: I0218 00:54:43.228184 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 00:54:43 crc kubenswrapper[4858]: I0218 00:54:43.768783 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:43 crc kubenswrapper[4858]: I0218 00:54:43.768821 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:43 crc kubenswrapper[4858]: I0218 00:54:43.823581 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:43 crc kubenswrapper[4858]: I0218 00:54:43.824093 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:44 crc kubenswrapper[4858]: I0218 00:54:44.236530 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:44 crc kubenswrapper[4858]: I0218 00:54:44.236798 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:44 crc kubenswrapper[4858]: I0218 00:54:44.244577 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 18 00:54:45 crc kubenswrapper[4858]: I0218 00:54:45.245682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerStarted","Data":"81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91"} Feb 18 00:54:45 crc kubenswrapper[4858]: I0218 00:54:45.245925 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="ceilometer-central-agent" containerID="cri-o://ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641" gracePeriod=30 Feb 18 00:54:45 crc kubenswrapper[4858]: I0218 00:54:45.246037 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="ceilometer-notification-agent" containerID="cri-o://21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59" gracePeriod=30 Feb 18 00:54:45 crc kubenswrapper[4858]: I0218 00:54:45.245969 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="proxy-httpd" containerID="cri-o://81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91" gracePeriod=30 Feb 18 00:54:45 crc kubenswrapper[4858]: I0218 
00:54:45.246157 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="sg-core" containerID="cri-o://f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4" gracePeriod=30 Feb 18 00:54:45 crc kubenswrapper[4858]: I0218 00:54:45.277246 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6337416019999997 podStartE2EDuration="6.277225755s" podCreationTimestamp="2026-02-18 00:54:39 +0000 UTC" firstStartedPulling="2026-02-18 00:54:40.410521569 +0000 UTC m=+1233.716358311" lastFinishedPulling="2026-02-18 00:54:44.054005732 +0000 UTC m=+1237.359842464" observedRunningTime="2026-02-18 00:54:45.268439441 +0000 UTC m=+1238.574276173" watchObservedRunningTime="2026-02-18 00:54:45.277225755 +0000 UTC m=+1238.583062487" Feb 18 00:54:45 crc kubenswrapper[4858]: I0218 00:54:45.445454 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 00:54:45 crc kubenswrapper[4858]: I0218 00:54:45.446065 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:54:45 crc kubenswrapper[4858]: I0218 00:54:45.648068 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.258669 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerID="81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91" exitCode=0 Feb 18 00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.258697 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerID="f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4" exitCode=2 Feb 18 00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.258705 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerID="21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59" exitCode=0 Feb 18 00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.258748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerDied","Data":"81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91"} Feb 18 00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.258793 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerDied","Data":"f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4"} Feb 18 00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.258805 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerDied","Data":"21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59"} Feb 18 00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.258769 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.258830 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.425292 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 
00:54:46 crc kubenswrapper[4858]: I0218 00:54:46.430994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.197930 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bwphg"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.200749 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.208539 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bwphg"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.287697 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6hn7r"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.289374 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.299402 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnqq5\" (UniqueName: \"kubernetes.io/projected/12869268-4147-4557-bcaf-c027d1478c29-kube-api-access-rnqq5\") pod \"nova-cell0-db-create-6hn7r\" (UID: \"12869268-4147-4557-bcaf-c027d1478c29\") " pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.299582 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f482994d-5817-4411-861c-b9634b40bf88-operator-scripts\") pod \"nova-api-db-create-bwphg\" (UID: \"f482994d-5817-4411-861c-b9634b40bf88\") " pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.299667 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12869268-4147-4557-bcaf-c027d1478c29-operator-scripts\") pod \"nova-cell0-db-create-6hn7r\" (UID: \"12869268-4147-4557-bcaf-c027d1478c29\") " pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.299794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjfl4\" (UniqueName: \"kubernetes.io/projected/f482994d-5817-4411-861c-b9634b40bf88-kube-api-access-fjfl4\") pod \"nova-api-db-create-bwphg\" (UID: \"f482994d-5817-4411-861c-b9634b40bf88\") " pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.300014 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6hn7r"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.392309 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-gbzjd"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.393851 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.400988 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12869268-4147-4557-bcaf-c027d1478c29-operator-scripts\") pod \"nova-cell0-db-create-6hn7r\" (UID: \"12869268-4147-4557-bcaf-c027d1478c29\") " pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.401040 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjfl4\" (UniqueName: \"kubernetes.io/projected/f482994d-5817-4411-861c-b9634b40bf88-kube-api-access-fjfl4\") pod \"nova-api-db-create-bwphg\" (UID: \"f482994d-5817-4411-861c-b9634b40bf88\") " pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.401122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnqq5\" (UniqueName: \"kubernetes.io/projected/12869268-4147-4557-bcaf-c027d1478c29-kube-api-access-rnqq5\") pod \"nova-cell0-db-create-6hn7r\" (UID: \"12869268-4147-4557-bcaf-c027d1478c29\") " pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.401170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f482994d-5817-4411-861c-b9634b40bf88-operator-scripts\") pod \"nova-api-db-create-bwphg\" (UID: \"f482994d-5817-4411-861c-b9634b40bf88\") " pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.401862 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f482994d-5817-4411-861c-b9634b40bf88-operator-scripts\") pod \"nova-api-db-create-bwphg\" (UID: \"f482994d-5817-4411-861c-b9634b40bf88\") " pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.401918 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12869268-4147-4557-bcaf-c027d1478c29-operator-scripts\") pod \"nova-cell0-db-create-6hn7r\" (UID: \"12869268-4147-4557-bcaf-c027d1478c29\") " pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.408270 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c0da-account-create-update-s7czg"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.409517 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.413863 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.438926 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnqq5\" (UniqueName: \"kubernetes.io/projected/12869268-4147-4557-bcaf-c027d1478c29-kube-api-access-rnqq5\") pod \"nova-cell0-db-create-6hn7r\" (UID: \"12869268-4147-4557-bcaf-c027d1478c29\") " pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.455360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjfl4\" (UniqueName: \"kubernetes.io/projected/f482994d-5817-4411-861c-b9634b40bf88-kube-api-access-fjfl4\") pod \"nova-api-db-create-bwphg\" (UID: \"f482994d-5817-4411-861c-b9634b40bf88\") " pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.502882 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gbzjd"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.504473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9641f46c-7437-4828-aa73-a35c3c49c06f-operator-scripts\") pod \"nova-cell1-db-create-gbzjd\" (UID: \"9641f46c-7437-4828-aa73-a35c3c49c06f\") " pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.504699 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt8q2\" (UniqueName: \"kubernetes.io/projected/9641f46c-7437-4828-aa73-a35c3c49c06f-kube-api-access-lt8q2\") pod \"nova-cell1-db-create-gbzjd\" (UID: \"9641f46c-7437-4828-aa73-a35c3c49c06f\") " pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.505953 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c0da-account-create-update-s7czg"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.529116 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.594866 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-001d-account-create-update-nb7dp"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.596463 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.598698 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.607182 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9641f46c-7437-4828-aa73-a35c3c49c06f-operator-scripts\") pod \"nova-cell1-db-create-gbzjd\" (UID: \"9641f46c-7437-4828-aa73-a35c3c49c06f\") " pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.607381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt8q2\" (UniqueName: \"kubernetes.io/projected/9641f46c-7437-4828-aa73-a35c3c49c06f-kube-api-access-lt8q2\") pod \"nova-cell1-db-create-gbzjd\" (UID: \"9641f46c-7437-4828-aa73-a35c3c49c06f\") " pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.607534 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csljt\" (UniqueName: \"kubernetes.io/projected/25e5d349-2a21-4825-921a-f391f079db96-kube-api-access-csljt\") pod \"nova-api-c0da-account-create-update-s7czg\" (UID: \"25e5d349-2a21-4825-921a-f391f079db96\") " pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.607604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25e5d349-2a21-4825-921a-f391f079db96-operator-scripts\") pod \"nova-api-c0da-account-create-update-s7czg\" (UID: \"25e5d349-2a21-4825-921a-f391f079db96\") " pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.609166 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9641f46c-7437-4828-aa73-a35c3c49c06f-operator-scripts\") pod \"nova-cell1-db-create-gbzjd\" (UID: \"9641f46c-7437-4828-aa73-a35c3c49c06f\") " pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.612331 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.621152 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-001d-account-create-update-nb7dp"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.627205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt8q2\" (UniqueName: \"kubernetes.io/projected/9641f46c-7437-4828-aa73-a35c3c49c06f-kube-api-access-lt8q2\") pod \"nova-cell1-db-create-gbzjd\" (UID: \"9641f46c-7437-4828-aa73-a35c3c49c06f\") " pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.715631 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.716867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fxqd\" (UniqueName: \"kubernetes.io/projected/07f92966-13bb-4fa6-b5d6-388baaf16288-kube-api-access-2fxqd\") pod \"nova-cell0-001d-account-create-update-nb7dp\" (UID: \"07f92966-13bb-4fa6-b5d6-388baaf16288\") " pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.716941 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f92966-13bb-4fa6-b5d6-388baaf16288-operator-scripts\") pod \"nova-cell0-001d-account-create-update-nb7dp\" (UID: \"07f92966-13bb-4fa6-b5d6-388baaf16288\") " pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.716997 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csljt\" (UniqueName: \"kubernetes.io/projected/25e5d349-2a21-4825-921a-f391f079db96-kube-api-access-csljt\") pod \"nova-api-c0da-account-create-update-s7czg\" (UID: \"25e5d349-2a21-4825-921a-f391f079db96\") " pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.717029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25e5d349-2a21-4825-921a-f391f079db96-operator-scripts\") pod \"nova-api-c0da-account-create-update-s7czg\" (UID: \"25e5d349-2a21-4825-921a-f391f079db96\") " pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.717887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25e5d349-2a21-4825-921a-f391f079db96-operator-scripts\") pod \"nova-api-c0da-account-create-update-s7czg\" (UID: \"25e5d349-2a21-4825-921a-f391f079db96\") " pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.738139 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csljt\" (UniqueName: \"kubernetes.io/projected/25e5d349-2a21-4825-921a-f391f079db96-kube-api-access-csljt\") pod \"nova-api-c0da-account-create-update-s7czg\" (UID: \"25e5d349-2a21-4825-921a-f391f079db96\") " pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.806432 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a270-account-create-update-96tgv"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.808014 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.815087 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.816945 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a270-account-create-update-96tgv"] Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.820891 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fxqd\" (UniqueName: \"kubernetes.io/projected/07f92966-13bb-4fa6-b5d6-388baaf16288-kube-api-access-2fxqd\") pod \"nova-cell0-001d-account-create-update-nb7dp\" (UID: \"07f92966-13bb-4fa6-b5d6-388baaf16288\") " pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.820935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f92966-13bb-4fa6-b5d6-388baaf16288-operator-scripts\") pod \"nova-cell0-001d-account-create-update-nb7dp\" (UID: \"07f92966-13bb-4fa6-b5d6-388baaf16288\") " pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.820999 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afbd10d3-a140-407f-b44d-52a42e8dec44-operator-scripts\") pod \"nova-cell1-a270-account-create-update-96tgv\" (UID: \"afbd10d3-a140-407f-b44d-52a42e8dec44\") " pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.821038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vc9\" (UniqueName: \"kubernetes.io/projected/afbd10d3-a140-407f-b44d-52a42e8dec44-kube-api-access-m8vc9\") pod \"nova-cell1-a270-account-create-update-96tgv\" (UID: \"afbd10d3-a140-407f-b44d-52a42e8dec44\") " pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.822070 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f92966-13bb-4fa6-b5d6-388baaf16288-operator-scripts\") pod \"nova-cell0-001d-account-create-update-nb7dp\" (UID: \"07f92966-13bb-4fa6-b5d6-388baaf16288\") " pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.822556 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.849013 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fxqd\" (UniqueName: \"kubernetes.io/projected/07f92966-13bb-4fa6-b5d6-388baaf16288-kube-api-access-2fxqd\") pod \"nova-cell0-001d-account-create-update-nb7dp\" (UID: \"07f92966-13bb-4fa6-b5d6-388baaf16288\") " pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.922608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afbd10d3-a140-407f-b44d-52a42e8dec44-operator-scripts\") pod \"nova-cell1-a270-account-create-update-96tgv\" (UID: \"afbd10d3-a140-407f-b44d-52a42e8dec44\") " pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.922661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8vc9\" (UniqueName: \"kubernetes.io/projected/afbd10d3-a140-407f-b44d-52a42e8dec44-kube-api-access-m8vc9\") pod \"nova-cell1-a270-account-create-update-96tgv\" (UID: \"afbd10d3-a140-407f-b44d-52a42e8dec44\") " pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.923665 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afbd10d3-a140-407f-b44d-52a42e8dec44-operator-scripts\") pod \"nova-cell1-a270-account-create-update-96tgv\" (UID: \"afbd10d3-a140-407f-b44d-52a42e8dec44\") " pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:51 crc kubenswrapper[4858]: I0218 00:54:51.949660 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vc9\" (UniqueName: \"kubernetes.io/projected/afbd10d3-a140-407f-b44d-52a42e8dec44-kube-api-access-m8vc9\") pod \"nova-cell1-a270-account-create-update-96tgv\" (UID: \"afbd10d3-a140-407f-b44d-52a42e8dec44\") " pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.035518 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.096466 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bwphg"] Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.137862 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:52 crc kubenswrapper[4858]: W0218 00:54:52.158258 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf482994d_5817_4411_861c_b9634b40bf88.slice/crio-e334c515a3071775fefa5833d5baeebb3a146cdcc3b6c1cd16e6b105948f71f4 WatchSource:0}: Error finding container e334c515a3071775fefa5833d5baeebb3a146cdcc3b6c1cd16e6b105948f71f4: Status 404 returned error can't find the container with id e334c515a3071775fefa5833d5baeebb3a146cdcc3b6c1cd16e6b105948f71f4 Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.208400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6hn7r"] Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.317709 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hn7r" event={"ID":"12869268-4147-4557-bcaf-c027d1478c29","Type":"ContainerStarted","Data":"01640f47b296d3923e9dd042f675a4ab982ced31c095e73becb54d1bdc551e71"} Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.319901 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bwphg" event={"ID":"f482994d-5817-4411-861c-b9634b40bf88","Type":"ContainerStarted","Data":"e334c515a3071775fefa5833d5baeebb3a146cdcc3b6c1cd16e6b105948f71f4"} Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.328235 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gbzjd"] Feb 18 00:54:52 crc kubenswrapper[4858]: W0218 00:54:52.339786 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9641f46c_7437_4828_aa73_a35c3c49c06f.slice/crio-0d2cf10b39161bdd6b7adc7eec639ce20116d3e62049773241b9a94a38aaebee WatchSource:0}: Error finding container 0d2cf10b39161bdd6b7adc7eec639ce20116d3e62049773241b9a94a38aaebee: Status 404 returned error can't find the container with id 0d2cf10b39161bdd6b7adc7eec639ce20116d3e62049773241b9a94a38aaebee Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.427775 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c0da-account-create-update-s7czg"] Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.599127 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a270-account-create-update-96tgv"] Feb 18 00:54:52 crc kubenswrapper[4858]: I0218 00:54:52.658914 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-001d-account-create-update-nb7dp"] Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.330644 4858 generic.go:334] "Generic (PLEG): container finished" podID="07f92966-13bb-4fa6-b5d6-388baaf16288" containerID="ba9bd69572b54cefe3d5575f3479af10a80b9f15d10ea10d02c75f385c9c4c2e" exitCode=0 Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.330684 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-001d-account-create-update-nb7dp" event={"ID":"07f92966-13bb-4fa6-b5d6-388baaf16288","Type":"ContainerDied","Data":"ba9bd69572b54cefe3d5575f3479af10a80b9f15d10ea10d02c75f385c9c4c2e"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.330733 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-001d-account-create-update-nb7dp" 
event={"ID":"07f92966-13bb-4fa6-b5d6-388baaf16288","Type":"ContainerStarted","Data":"02edc37c68c7dda4835436e1ed9be2f96e25ffe88cd05dfd2929e04ebadc1be4"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.332366 4858 generic.go:334] "Generic (PLEG): container finished" podID="afbd10d3-a140-407f-b44d-52a42e8dec44" containerID="b9d424d283417ed8611b52e8f476cf01a72f2dda2a1f95cc3d94a3214875d11d" exitCode=0 Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.332443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a270-account-create-update-96tgv" event={"ID":"afbd10d3-a140-407f-b44d-52a42e8dec44","Type":"ContainerDied","Data":"b9d424d283417ed8611b52e8f476cf01a72f2dda2a1f95cc3d94a3214875d11d"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.332467 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a270-account-create-update-96tgv" event={"ID":"afbd10d3-a140-407f-b44d-52a42e8dec44","Type":"ContainerStarted","Data":"6106f6c0ea564b8f266f7082cd8f8af675a2d3ffb6cef5ff83f67a8fc6affc7d"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.334702 4858 generic.go:334] "Generic (PLEG): container finished" podID="f482994d-5817-4411-861c-b9634b40bf88" containerID="9fe3a961b055e6ca858f5152bdb53c66edf9e03a9cb23eecb3a98b5fc95d1097" exitCode=0 Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.334749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bwphg" event={"ID":"f482994d-5817-4411-861c-b9634b40bf88","Type":"ContainerDied","Data":"9fe3a961b055e6ca858f5152bdb53c66edf9e03a9cb23eecb3a98b5fc95d1097"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.337476 4858 generic.go:334] "Generic (PLEG): container finished" podID="25e5d349-2a21-4825-921a-f391f079db96" containerID="db67aca5adefd30832957b7dd1582244533d4b38a35de7f43d8fcc7fa48486e4" exitCode=0 Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.337567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c0da-account-create-update-s7czg" event={"ID":"25e5d349-2a21-4825-921a-f391f079db96","Type":"ContainerDied","Data":"db67aca5adefd30832957b7dd1582244533d4b38a35de7f43d8fcc7fa48486e4"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.337592 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c0da-account-create-update-s7czg" event={"ID":"25e5d349-2a21-4825-921a-f391f079db96","Type":"ContainerStarted","Data":"18db78b4c37ae20d9991f50f7b217f1d94bb4d73c70c6422fe20d76e40d294de"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.339545 4858 generic.go:334] "Generic (PLEG): container finished" podID="9641f46c-7437-4828-aa73-a35c3c49c06f" containerID="f0744d27366509bcfb677df37dab469eeee5d9304b2e2ab77bb239c8569e404b" exitCode=0 Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.339622 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gbzjd" event={"ID":"9641f46c-7437-4828-aa73-a35c3c49c06f","Type":"ContainerDied","Data":"f0744d27366509bcfb677df37dab469eeee5d9304b2e2ab77bb239c8569e404b"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.339648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gbzjd" event={"ID":"9641f46c-7437-4828-aa73-a35c3c49c06f","Type":"ContainerStarted","Data":"0d2cf10b39161bdd6b7adc7eec639ce20116d3e62049773241b9a94a38aaebee"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.341005 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="12869268-4147-4557-bcaf-c027d1478c29" containerID="2f9ecffb0aa4715879c10e46d9d7cb6852814799b8c44e2643390ac4567d7430" exitCode=0 Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.341052 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hn7r" event={"ID":"12869268-4147-4557-bcaf-c027d1478c29","Type":"ContainerDied","Data":"2f9ecffb0aa4715879c10e46d9d7cb6852814799b8c44e2643390ac4567d7430"} Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.904624 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.973159 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-combined-ca-bundle\") pod \"b3a688a8-79c1-419d-8ad9-01fb945592c8\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.973254 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cc86\" (UniqueName: \"kubernetes.io/projected/b3a688a8-79c1-419d-8ad9-01fb945592c8-kube-api-access-6cc86\") pod \"b3a688a8-79c1-419d-8ad9-01fb945592c8\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.973290 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-run-httpd\") pod \"b3a688a8-79c1-419d-8ad9-01fb945592c8\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.973373 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-scripts\") pod \"b3a688a8-79c1-419d-8ad9-01fb945592c8\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.973465 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-sg-core-conf-yaml\") pod \"b3a688a8-79c1-419d-8ad9-01fb945592c8\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.973518 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-config-data\") pod \"b3a688a8-79c1-419d-8ad9-01fb945592c8\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.973560 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-log-httpd\") pod \"b3a688a8-79c1-419d-8ad9-01fb945592c8\" (UID: \"b3a688a8-79c1-419d-8ad9-01fb945592c8\") " Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.973840 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b3a688a8-79c1-419d-8ad9-01fb945592c8" (UID: "b3a688a8-79c1-419d-8ad9-01fb945592c8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.974205 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b3a688a8-79c1-419d-8ad9-01fb945592c8" (UID: "b3a688a8-79c1-419d-8ad9-01fb945592c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.974397 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.974415 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a688a8-79c1-419d-8ad9-01fb945592c8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.980981 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a688a8-79c1-419d-8ad9-01fb945592c8-kube-api-access-6cc86" (OuterVolumeSpecName: "kube-api-access-6cc86") pod "b3a688a8-79c1-419d-8ad9-01fb945592c8" (UID: "b3a688a8-79c1-419d-8ad9-01fb945592c8"). InnerVolumeSpecName "kube-api-access-6cc86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:53 crc kubenswrapper[4858]: I0218 00:54:53.987631 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-scripts" (OuterVolumeSpecName: "scripts") pod "b3a688a8-79c1-419d-8ad9-01fb945592c8" (UID: "b3a688a8-79c1-419d-8ad9-01fb945592c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.003789 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b3a688a8-79c1-419d-8ad9-01fb945592c8" (UID: "b3a688a8-79c1-419d-8ad9-01fb945592c8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.061451 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3a688a8-79c1-419d-8ad9-01fb945592c8" (UID: "b3a688a8-79c1-419d-8ad9-01fb945592c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.075575 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.075608 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.075622 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cc86\" (UniqueName: \"kubernetes.io/projected/b3a688a8-79c1-419d-8ad9-01fb945592c8-kube-api-access-6cc86\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.075636 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.088347 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-config-data" (OuterVolumeSpecName: "config-data") pod "b3a688a8-79c1-419d-8ad9-01fb945592c8" (UID: "b3a688a8-79c1-419d-8ad9-01fb945592c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.177024 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a688a8-79c1-419d-8ad9-01fb945592c8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.356797 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerID="ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641" exitCode=0 Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.356927 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.356999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerDied","Data":"ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641"} Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.357042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a688a8-79c1-419d-8ad9-01fb945592c8","Type":"ContainerDied","Data":"67f296961dd65634f7d629b404e5ae8bbdbcf20b99218058de46d97f7a000bb0"} Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.357070 4858 scope.go:117] "RemoveContainer" containerID="81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.459388 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.470930 4858 scope.go:117] "RemoveContainer" containerID="f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.476586 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.492728 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:54 crc kubenswrapper[4858]: E0218 00:54:54.493186 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="sg-core" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.493199 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="sg-core" Feb 18 00:54:54 crc kubenswrapper[4858]: E0218 00:54:54.493232 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="ceilometer-notification-agent" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.493238 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="ceilometer-notification-agent" Feb 18 00:54:54 crc kubenswrapper[4858]: E0218 00:54:54.493254 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="ceilometer-central-agent" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.493261 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="ceilometer-central-agent" Feb 18 00:54:54 crc kubenswrapper[4858]: E0218 00:54:54.493270 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="proxy-httpd" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.493275 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="proxy-httpd" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.493516 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="ceilometer-central-agent" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.493537 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="sg-core" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.493547 4858 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="proxy-httpd" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.493556 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" containerName="ceilometer-notification-agent" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.495706 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.499330 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.504354 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.505800 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.529667 4858 scope.go:117] "RemoveContainer" containerID="21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.589772 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24884\" (UniqueName: \"kubernetes.io/projected/933ab558-a1b3-4850-881b-54a3a69d7320-kube-api-access-24884\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.589853 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-config-data\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.589887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-scripts\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.589904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-log-httpd\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.589989 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-run-httpd\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.590054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.590336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.591393 4858 scope.go:117] "RemoveContainer" containerID="ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.613473 4858 scope.go:117] "RemoveContainer" containerID="81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91" Feb 18 00:54:54 crc kubenswrapper[4858]: E0218 00:54:54.614083 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91\": container with ID starting with 81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91 not found: ID does not exist" containerID="81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.614137 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91"} err="failed to get container status \"81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91\": rpc error: code = NotFound desc = could not find container \"81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91\": container with ID starting with 81ae5e6f8daf461e4dbcc8712e74dc34631b4ad1b13c35caaf6cf9f459182e91 not found: ID does not exist" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.614171 4858 scope.go:117] "RemoveContainer" containerID="f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4" Feb 18 00:54:54 crc kubenswrapper[4858]: E0218 00:54:54.614538 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4\": container with ID starting with f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4 not found: ID does not exist" containerID="f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.614576 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4"} err="failed to get container status \"f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4\": rpc error: code = NotFound desc = could not find container \"f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4\": container with ID starting with f2420274b5a92663325bb18fb1c9b5d04c1c829bec2b6dac4a701044d6462ad4 not found: ID does not exist" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.614602 4858 scope.go:117] "RemoveContainer" containerID="21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59" Feb 18 00:54:54 crc kubenswrapper[4858]: E0218 00:54:54.614885 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59\": container with ID starting with 21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59 not found: ID does not exist" containerID="21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59" Feb 18 00:54:54 crc 
kubenswrapper[4858]: I0218 00:54:54.614999 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59"} err="failed to get container status \"21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59\": rpc error: code = NotFound desc = could not find container \"21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59\": container with ID starting with 21a409eb88177783cc0eaaae901fe2a5bea20a27fdc927c183cdd0dcef505b59 not found: ID does not exist" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.615073 4858 scope.go:117] "RemoveContainer" containerID="ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641" Feb 18 00:54:54 crc kubenswrapper[4858]: E0218 00:54:54.615399 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641\": container with ID starting with ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641 not found: ID does not exist" containerID="ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.615462 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641"} err="failed to get container status \"ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641\": rpc error: code = NotFound desc = could not find container \"ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641\": container with ID starting with ebfe4bab3cf848822f40298ede5cf93e9f6d1975b220e3d4c9c608a4d9c11641 not found: ID does not exist" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.692518 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-run-httpd\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.692799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.693026 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.693142 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24884\" (UniqueName: \"kubernetes.io/projected/933ab558-a1b3-4850-881b-54a3a69d7320-kube-api-access-24884\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.693277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-config-data\") pod \"ceilometer-0\" (UID: 
\"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.693391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-scripts\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.693596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-run-httpd\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.693488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-log-httpd\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.698396 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.708085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-config-data\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.709069 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-log-httpd\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.710823 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24884\" (UniqueName: \"kubernetes.io/projected/933ab558-a1b3-4850-881b-54a3a69d7320-kube-api-access-24884\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.713382 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-scripts\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.713576 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.835675 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:54:54 crc kubenswrapper[4858]: I0218 00:54:54.973286 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.004402 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8vc9\" (UniqueName: \"kubernetes.io/projected/afbd10d3-a140-407f-b44d-52a42e8dec44-kube-api-access-m8vc9\") pod \"afbd10d3-a140-407f-b44d-52a42e8dec44\" (UID: \"afbd10d3-a140-407f-b44d-52a42e8dec44\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.004516 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afbd10d3-a140-407f-b44d-52a42e8dec44-operator-scripts\") pod \"afbd10d3-a140-407f-b44d-52a42e8dec44\" (UID: \"afbd10d3-a140-407f-b44d-52a42e8dec44\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.005708 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbd10d3-a140-407f-b44d-52a42e8dec44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afbd10d3-a140-407f-b44d-52a42e8dec44" (UID: "afbd10d3-a140-407f-b44d-52a42e8dec44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.009599 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbd10d3-a140-407f-b44d-52a42e8dec44-kube-api-access-m8vc9" (OuterVolumeSpecName: "kube-api-access-m8vc9") pod "afbd10d3-a140-407f-b44d-52a42e8dec44" (UID: "afbd10d3-a140-407f-b44d-52a42e8dec44"). InnerVolumeSpecName "kube-api-access-m8vc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.107125 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8vc9\" (UniqueName: \"kubernetes.io/projected/afbd10d3-a140-407f-b44d-52a42e8dec44-kube-api-access-m8vc9\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.107387 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afbd10d3-a140-407f-b44d-52a42e8dec44-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.120879 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.136958 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.151334 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.162057 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.169687 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.208472 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f92966-13bb-4fa6-b5d6-388baaf16288-operator-scripts\") pod \"07f92966-13bb-4fa6-b5d6-388baaf16288\" (UID: \"07f92966-13bb-4fa6-b5d6-388baaf16288\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.208590 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjfl4\" (UniqueName: \"kubernetes.io/projected/f482994d-5817-4411-861c-b9634b40bf88-kube-api-access-fjfl4\") pod \"f482994d-5817-4411-861c-b9634b40bf88\" (UID: \"f482994d-5817-4411-861c-b9634b40bf88\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.208957 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07f92966-13bb-4fa6-b5d6-388baaf16288-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07f92966-13bb-4fa6-b5d6-388baaf16288" (UID: "07f92966-13bb-4fa6-b5d6-388baaf16288"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.209307 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f482994d-5817-4411-861c-b9634b40bf88-operator-scripts\") pod \"f482994d-5817-4411-861c-b9634b40bf88\" (UID: \"f482994d-5817-4411-861c-b9634b40bf88\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.209737 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f482994d-5817-4411-861c-b9634b40bf88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f482994d-5817-4411-861c-b9634b40bf88" (UID: "f482994d-5817-4411-861c-b9634b40bf88"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.209797 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt8q2\" (UniqueName: \"kubernetes.io/projected/9641f46c-7437-4828-aa73-a35c3c49c06f-kube-api-access-lt8q2\") pod \"9641f46c-7437-4828-aa73-a35c3c49c06f\" (UID: \"9641f46c-7437-4828-aa73-a35c3c49c06f\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.209887 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12869268-4147-4557-bcaf-c027d1478c29-operator-scripts\") pod \"12869268-4147-4557-bcaf-c027d1478c29\" (UID: \"12869268-4147-4557-bcaf-c027d1478c29\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.209936 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csljt\" (UniqueName: \"kubernetes.io/projected/25e5d349-2a21-4825-921a-f391f079db96-kube-api-access-csljt\") pod \"25e5d349-2a21-4825-921a-f391f079db96\" (UID: \"25e5d349-2a21-4825-921a-f391f079db96\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.210414 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnqq5\" (UniqueName: \"kubernetes.io/projected/12869268-4147-4557-bcaf-c027d1478c29-kube-api-access-rnqq5\") pod \"12869268-4147-4557-bcaf-c027d1478c29\" (UID: \"12869268-4147-4557-bcaf-c027d1478c29\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.210446 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9641f46c-7437-4828-aa73-a35c3c49c06f-operator-scripts\") pod \"9641f46c-7437-4828-aa73-a35c3c49c06f\" (UID: \"9641f46c-7437-4828-aa73-a35c3c49c06f\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.210491 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fxqd\" (UniqueName: \"kubernetes.io/projected/07f92966-13bb-4fa6-b5d6-388baaf16288-kube-api-access-2fxqd\") pod \"07f92966-13bb-4fa6-b5d6-388baaf16288\" (UID: \"07f92966-13bb-4fa6-b5d6-388baaf16288\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.210552 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25e5d349-2a21-4825-921a-f391f079db96-operator-scripts\") pod \"25e5d349-2a21-4825-921a-f391f079db96\" (UID: \"25e5d349-2a21-4825-921a-f391f079db96\") " Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.210811 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12869268-4147-4557-bcaf-c027d1478c29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12869268-4147-4557-bcaf-c027d1478c29" (UID: "12869268-4147-4557-bcaf-c027d1478c29"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.210899 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9641f46c-7437-4828-aa73-a35c3c49c06f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9641f46c-7437-4828-aa73-a35c3c49c06f" (UID: "9641f46c-7437-4828-aa73-a35c3c49c06f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.211100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e5d349-2a21-4825-921a-f391f079db96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25e5d349-2a21-4825-921a-f391f079db96" (UID: "25e5d349-2a21-4825-921a-f391f079db96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.211621 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9641f46c-7437-4828-aa73-a35c3c49c06f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.211638 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25e5d349-2a21-4825-921a-f391f079db96-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.211648 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07f92966-13bb-4fa6-b5d6-388baaf16288-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.211657 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f482994d-5817-4411-861c-b9634b40bf88-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.211666 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12869268-4147-4557-bcaf-c027d1478c29-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.217683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e5d349-2a21-4825-921a-f391f079db96-kube-api-access-csljt" (OuterVolumeSpecName: "kube-api-access-csljt") pod "25e5d349-2a21-4825-921a-f391f079db96" (UID: "25e5d349-2a21-4825-921a-f391f079db96"). InnerVolumeSpecName "kube-api-access-csljt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.217745 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9641f46c-7437-4828-aa73-a35c3c49c06f-kube-api-access-lt8q2" (OuterVolumeSpecName: "kube-api-access-lt8q2") pod "9641f46c-7437-4828-aa73-a35c3c49c06f" (UID: "9641f46c-7437-4828-aa73-a35c3c49c06f"). InnerVolumeSpecName "kube-api-access-lt8q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.217790 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f482994d-5817-4411-861c-b9634b40bf88-kube-api-access-fjfl4" (OuterVolumeSpecName: "kube-api-access-fjfl4") pod "f482994d-5817-4411-861c-b9634b40bf88" (UID: "f482994d-5817-4411-861c-b9634b40bf88"). InnerVolumeSpecName "kube-api-access-fjfl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.217822 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12869268-4147-4557-bcaf-c027d1478c29-kube-api-access-rnqq5" (OuterVolumeSpecName: "kube-api-access-rnqq5") pod "12869268-4147-4557-bcaf-c027d1478c29" (UID: "12869268-4147-4557-bcaf-c027d1478c29"). InnerVolumeSpecName "kube-api-access-rnqq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.219169 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f92966-13bb-4fa6-b5d6-388baaf16288-kube-api-access-2fxqd" (OuterVolumeSpecName: "kube-api-access-2fxqd") pod "07f92966-13bb-4fa6-b5d6-388baaf16288" (UID: "07f92966-13bb-4fa6-b5d6-388baaf16288"). InnerVolumeSpecName "kube-api-access-2fxqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.312841 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fxqd\" (UniqueName: \"kubernetes.io/projected/07f92966-13bb-4fa6-b5d6-388baaf16288-kube-api-access-2fxqd\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.312868 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjfl4\" (UniqueName: \"kubernetes.io/projected/f482994d-5817-4411-861c-b9634b40bf88-kube-api-access-fjfl4\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.312877 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt8q2\" (UniqueName: \"kubernetes.io/projected/9641f46c-7437-4828-aa73-a35c3c49c06f-kube-api-access-lt8q2\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.312885 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csljt\" (UniqueName: \"kubernetes.io/projected/25e5d349-2a21-4825-921a-f391f079db96-kube-api-access-csljt\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.312893 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnqq5\" (UniqueName: \"kubernetes.io/projected/12869268-4147-4557-bcaf-c027d1478c29-kube-api-access-rnqq5\") on node \"crc\" DevicePath \"\"" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.366383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-001d-account-create-update-nb7dp" event={"ID":"07f92966-13bb-4fa6-b5d6-388baaf16288","Type":"ContainerDied","Data":"02edc37c68c7dda4835436e1ed9be2f96e25ffe88cd05dfd2929e04ebadc1be4"} Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.366431 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02edc37c68c7dda4835436e1ed9be2f96e25ffe88cd05dfd2929e04ebadc1be4" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.366478 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-001d-account-create-update-nb7dp" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.374894 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a270-account-create-update-96tgv" event={"ID":"afbd10d3-a140-407f-b44d-52a42e8dec44","Type":"ContainerDied","Data":"6106f6c0ea564b8f266f7082cd8f8af675a2d3ffb6cef5ff83f67a8fc6affc7d"} Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.374937 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6106f6c0ea564b8f266f7082cd8f8af675a2d3ffb6cef5ff83f67a8fc6affc7d" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.374965 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a270-account-create-update-96tgv" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.376771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bwphg" event={"ID":"f482994d-5817-4411-861c-b9634b40bf88","Type":"ContainerDied","Data":"e334c515a3071775fefa5833d5baeebb3a146cdcc3b6c1cd16e6b105948f71f4"} Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.376822 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e334c515a3071775fefa5833d5baeebb3a146cdcc3b6c1cd16e6b105948f71f4" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.376795 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bwphg" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.378700 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c0da-account-create-update-s7czg" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.378713 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c0da-account-create-update-s7czg" event={"ID":"25e5d349-2a21-4825-921a-f391f079db96","Type":"ContainerDied","Data":"18db78b4c37ae20d9991f50f7b217f1d94bb4d73c70c6422fe20d76e40d294de"} Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.378751 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18db78b4c37ae20d9991f50f7b217f1d94bb4d73c70c6422fe20d76e40d294de" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.382672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gbzjd" event={"ID":"9641f46c-7437-4828-aa73-a35c3c49c06f","Type":"ContainerDied","Data":"0d2cf10b39161bdd6b7adc7eec639ce20116d3e62049773241b9a94a38aaebee"} Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.382700 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2cf10b39161bdd6b7adc7eec639ce20116d3e62049773241b9a94a38aaebee" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.382757 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gbzjd" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.385151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hn7r" event={"ID":"12869268-4147-4557-bcaf-c027d1478c29","Type":"ContainerDied","Data":"01640f47b296d3923e9dd042f675a4ab982ced31c095e73becb54d1bdc551e71"} Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.385211 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01640f47b296d3923e9dd042f675a4ab982ced31c095e73becb54d1bdc551e71" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.385293 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hn7r" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.433107 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3a688a8-79c1-419d-8ad9-01fb945592c8" path="/var/lib/kubelet/pods/b3a688a8-79c1-419d-8ad9-01fb945592c8/volumes" Feb 18 00:54:55 crc kubenswrapper[4858]: I0218 00:54:55.513203 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:54:55 crc kubenswrapper[4858]: W0218 00:54:55.513476 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod933ab558_a1b3_4850_881b_54a3a69d7320.slice/crio-41ae1df0fdb6ae7e5c28e2dde3cc15f262bca9394db82a2dbb05d25e9234c674 WatchSource:0}: Error finding container 41ae1df0fdb6ae7e5c28e2dde3cc15f262bca9394db82a2dbb05d25e9234c674: Status 404 returned error can't find the container with id 41ae1df0fdb6ae7e5c28e2dde3cc15f262bca9394db82a2dbb05d25e9234c674 Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.397983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerStarted","Data":"cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934"} Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.398222 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerStarted","Data":"41ae1df0fdb6ae7e5c28e2dde3cc15f262bca9394db82a2dbb05d25e9234c674"} Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.849444 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9m2qq"] Feb 18 00:54:56 crc kubenswrapper[4858]: E0218 00:54:56.849893 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e5d349-2a21-4825-921a-f391f079db96" containerName="mariadb-account-create-update" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.849914 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e5d349-2a21-4825-921a-f391f079db96" containerName="mariadb-account-create-update" Feb 18 00:54:56 crc kubenswrapper[4858]: E0218 00:54:56.849927 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f92966-13bb-4fa6-b5d6-388baaf16288" containerName="mariadb-account-create-update" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.849933 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f92966-13bb-4fa6-b5d6-388baaf16288" containerName="mariadb-account-create-update" Feb 18 00:54:56 crc kubenswrapper[4858]: E0218 00:54:56.849951 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbd10d3-a140-407f-b44d-52a42e8dec44" 
containerName="mariadb-account-create-update" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.849957 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbd10d3-a140-407f-b44d-52a42e8dec44" containerName="mariadb-account-create-update" Feb 18 00:54:56 crc kubenswrapper[4858]: E0218 00:54:56.849969 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f482994d-5817-4411-861c-b9634b40bf88" containerName="mariadb-database-create" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.849975 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f482994d-5817-4411-861c-b9634b40bf88" containerName="mariadb-database-create" Feb 18 00:54:56 crc kubenswrapper[4858]: E0218 00:54:56.849985 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12869268-4147-4557-bcaf-c027d1478c29" containerName="mariadb-database-create" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.849991 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="12869268-4147-4557-bcaf-c027d1478c29" containerName="mariadb-database-create" Feb 18 00:54:56 crc kubenswrapper[4858]: E0218 00:54:56.850019 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9641f46c-7437-4828-aa73-a35c3c49c06f" containerName="mariadb-database-create" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.850025 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9641f46c-7437-4828-aa73-a35c3c49c06f" containerName="mariadb-database-create" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.850214 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f482994d-5817-4411-861c-b9634b40bf88" containerName="mariadb-database-create" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.850232 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbd10d3-a140-407f-b44d-52a42e8dec44" containerName="mariadb-account-create-update" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.850243 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f92966-13bb-4fa6-b5d6-388baaf16288" containerName="mariadb-account-create-update" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.850251 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9641f46c-7437-4828-aa73-a35c3c49c06f" containerName="mariadb-database-create" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.850273 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e5d349-2a21-4825-921a-f391f079db96" containerName="mariadb-account-create-update" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.850282 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="12869268-4147-4557-bcaf-c027d1478c29" containerName="mariadb-database-create" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.851023 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.854134 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.854273 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.854596 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9gnzw" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.858148 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9m2qq"] Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.943087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.943157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-scripts\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.943488 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79dlk\" (UniqueName: \"kubernetes.io/projected/ed63468b-fdca-49b9-b26c-8ab532261519-kube-api-access-79dlk\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:56 crc kubenswrapper[4858]: I0218 00:54:56.943603 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-config-data\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.044843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.044928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-scripts\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.045010 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79dlk\" (UniqueName: \"kubernetes.io/projected/ed63468b-fdca-49b9-b26c-8ab532261519-kube-api-access-79dlk\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: 
\"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.045032 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-config-data\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.048702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-scripts\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.049160 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-config-data\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.049947 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.068062 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79dlk\" (UniqueName: \"kubernetes.io/projected/ed63468b-fdca-49b9-b26c-8ab532261519-kube-api-access-79dlk\") pod \"nova-cell0-conductor-db-sync-9m2qq\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.179442 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.416255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerStarted","Data":"958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4"} Feb 18 00:54:57 crc kubenswrapper[4858]: I0218 00:54:57.640005 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9m2qq"] Feb 18 00:54:57 crc kubenswrapper[4858]: W0218 00:54:57.640722 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded63468b_fdca_49b9_b26c_8ab532261519.slice/crio-1be29a5876cff767c0bb3f5d76b267fe56c0e353bfdb410b2dced7efd66d8859 WatchSource:0}: Error finding container 1be29a5876cff767c0bb3f5d76b267fe56c0e353bfdb410b2dced7efd66d8859: Status 404 returned error can't find the container with id 1be29a5876cff767c0bb3f5d76b267fe56c0e353bfdb410b2dced7efd66d8859 Feb 18 00:54:58 crc kubenswrapper[4858]: I0218 00:54:58.429359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9m2qq" event={"ID":"ed63468b-fdca-49b9-b26c-8ab532261519","Type":"ContainerStarted","Data":"1be29a5876cff767c0bb3f5d76b267fe56c0e353bfdb410b2dced7efd66d8859"} Feb 18 00:54:58 crc kubenswrapper[4858]: I0218 00:54:58.438202 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerStarted","Data":"b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659"} Feb 18 00:54:59 crc kubenswrapper[4858]: I0218 00:54:59.450168 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerStarted","Data":"37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8"} Feb 18 00:54:59 crc kubenswrapper[4858]: I0218 00:54:59.451881 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:54:59 crc kubenswrapper[4858]: I0218 00:54:59.485876 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.496476717 podStartE2EDuration="5.485859678s" podCreationTimestamp="2026-02-18 00:54:54 +0000 UTC" firstStartedPulling="2026-02-18 00:54:55.516370055 +0000 UTC m=+1248.822206787" lastFinishedPulling="2026-02-18 00:54:58.505753016 +0000 UTC m=+1251.811589748" observedRunningTime="2026-02-18 00:54:59.480013955 +0000 UTC m=+1252.785850717" watchObservedRunningTime="2026-02-18 00:54:59.485859678 +0000 UTC m=+1252.791696410" Feb 18 00:55:05 crc kubenswrapper[4858]: I0218 00:55:05.525533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9m2qq" event={"ID":"ed63468b-fdca-49b9-b26c-8ab532261519","Type":"ContainerStarted","Data":"d23e1c2e015054ff05db92aed4e0c3e9e1226951c591d0221622a92a9e337ffd"} Feb 18 00:55:14 crc kubenswrapper[4858]: I0218 00:55:14.642565 4858 generic.go:334] "Generic (PLEG): container finished" podID="ed63468b-fdca-49b9-b26c-8ab532261519" containerID="d23e1c2e015054ff05db92aed4e0c3e9e1226951c591d0221622a92a9e337ffd" exitCode=0 Feb 18 00:55:14 crc kubenswrapper[4858]: I0218 00:55:14.642663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9m2qq" 
event={"ID":"ed63468b-fdca-49b9-b26c-8ab532261519","Type":"ContainerDied","Data":"d23e1c2e015054ff05db92aed4e0c3e9e1226951c591d0221622a92a9e337ffd"} Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.133105 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.304760 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-combined-ca-bundle\") pod \"ed63468b-fdca-49b9-b26c-8ab532261519\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.305058 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79dlk\" (UniqueName: \"kubernetes.io/projected/ed63468b-fdca-49b9-b26c-8ab532261519-kube-api-access-79dlk\") pod \"ed63468b-fdca-49b9-b26c-8ab532261519\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.305168 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-scripts\") pod \"ed63468b-fdca-49b9-b26c-8ab532261519\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.305250 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-config-data\") pod \"ed63468b-fdca-49b9-b26c-8ab532261519\" (UID: \"ed63468b-fdca-49b9-b26c-8ab532261519\") " Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.311281 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-scripts" (OuterVolumeSpecName: "scripts") pod "ed63468b-fdca-49b9-b26c-8ab532261519" (UID: "ed63468b-fdca-49b9-b26c-8ab532261519"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.312248 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed63468b-fdca-49b9-b26c-8ab532261519-kube-api-access-79dlk" (OuterVolumeSpecName: "kube-api-access-79dlk") pod "ed63468b-fdca-49b9-b26c-8ab532261519" (UID: "ed63468b-fdca-49b9-b26c-8ab532261519"). InnerVolumeSpecName "kube-api-access-79dlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.339340 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed63468b-fdca-49b9-b26c-8ab532261519" (UID: "ed63468b-fdca-49b9-b26c-8ab532261519"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.343128 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-config-data" (OuterVolumeSpecName: "config-data") pod "ed63468b-fdca-49b9-b26c-8ab532261519" (UID: "ed63468b-fdca-49b9-b26c-8ab532261519"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.409418 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.410225 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79dlk\" (UniqueName: \"kubernetes.io/projected/ed63468b-fdca-49b9-b26c-8ab532261519-kube-api-access-79dlk\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.410270 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.410288 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed63468b-fdca-49b9-b26c-8ab532261519-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.680223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9m2qq" event={"ID":"ed63468b-fdca-49b9-b26c-8ab532261519","Type":"ContainerDied","Data":"1be29a5876cff767c0bb3f5d76b267fe56c0e353bfdb410b2dced7efd66d8859"} Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.680283 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be29a5876cff767c0bb3f5d76b267fe56c0e353bfdb410b2dced7efd66d8859" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.680389 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9m2qq" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.833336 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 00:55:16 crc kubenswrapper[4858]: E0218 00:55:16.833826 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed63468b-fdca-49b9-b26c-8ab532261519" containerName="nova-cell0-conductor-db-sync" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.833846 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed63468b-fdca-49b9-b26c-8ab532261519" containerName="nova-cell0-conductor-db-sync" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.834097 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed63468b-fdca-49b9-b26c-8ab532261519" containerName="nova-cell0-conductor-db-sync" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.834900 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.838477 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9gnzw" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.840312 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.856701 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.922263 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpt5q\" (UniqueName: \"kubernetes.io/projected/f054135a-7843-4399-9b3e-8d92bb101e7c-kube-api-access-wpt5q\") pod \"nova-cell0-conductor-0\" (UID: \"f054135a-7843-4399-9b3e-8d92bb101e7c\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.922475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f054135a-7843-4399-9b3e-8d92bb101e7c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f054135a-7843-4399-9b3e-8d92bb101e7c\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:16 crc kubenswrapper[4858]: I0218 00:55:16.922594 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f054135a-7843-4399-9b3e-8d92bb101e7c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f054135a-7843-4399-9b3e-8d92bb101e7c\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:17 crc kubenswrapper[4858]: I0218 00:55:17.024164 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpt5q\" (UniqueName: \"kubernetes.io/projected/f054135a-7843-4399-9b3e-8d92bb101e7c-kube-api-access-wpt5q\") pod \"nova-cell0-conductor-0\" (UID: \"f054135a-7843-4399-9b3e-8d92bb101e7c\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:17 crc kubenswrapper[4858]: I0218 00:55:17.024347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f054135a-7843-4399-9b3e-8d92bb101e7c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f054135a-7843-4399-9b3e-8d92bb101e7c\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:17 crc kubenswrapper[4858]: I0218 00:55:17.024434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f054135a-7843-4399-9b3e-8d92bb101e7c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f054135a-7843-4399-9b3e-8d92bb101e7c\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:17 crc kubenswrapper[4858]: I0218 00:55:17.031360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f054135a-7843-4399-9b3e-8d92bb101e7c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f054135a-7843-4399-9b3e-8d92bb101e7c\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:17 crc kubenswrapper[4858]: I0218 00:55:17.031823 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f054135a-7843-4399-9b3e-8d92bb101e7c-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"f054135a-7843-4399-9b3e-8d92bb101e7c\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:17 crc kubenswrapper[4858]: I0218 00:55:17.045638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpt5q\" (UniqueName: \"kubernetes.io/projected/f054135a-7843-4399-9b3e-8d92bb101e7c-kube-api-access-wpt5q\") pod \"nova-cell0-conductor-0\" (UID: \"f054135a-7843-4399-9b3e-8d92bb101e7c\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:17 crc kubenswrapper[4858]: I0218 00:55:17.154287 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:17 crc kubenswrapper[4858]: I0218 00:55:17.658403 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 00:55:17 crc kubenswrapper[4858]: W0218 00:55:17.666581 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf054135a_7843_4399_9b3e_8d92bb101e7c.slice/crio-77c035b3a069f670c2ed6c23fa24a7e741f8a0776f5ab5c00ccd0135cfde6bfa WatchSource:0}: Error finding container 77c035b3a069f670c2ed6c23fa24a7e741f8a0776f5ab5c00ccd0135cfde6bfa: Status 404 returned error can't find the container with id 77c035b3a069f670c2ed6c23fa24a7e741f8a0776f5ab5c00ccd0135cfde6bfa Feb 18 00:55:17 crc kubenswrapper[4858]: I0218 00:55:17.697343 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f054135a-7843-4399-9b3e-8d92bb101e7c","Type":"ContainerStarted","Data":"77c035b3a069f670c2ed6c23fa24a7e741f8a0776f5ab5c00ccd0135cfde6bfa"} Feb 18 00:55:18 crc kubenswrapper[4858]: I0218 00:55:18.710744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f054135a-7843-4399-9b3e-8d92bb101e7c","Type":"ContainerStarted","Data":"1249f0a9997051b7f68766b38e37c9c2da80b28588df59efe5550fa80f3a0768"} Feb 18 00:55:18 crc kubenswrapper[4858]: I0218 00:55:18.712260 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:18 crc kubenswrapper[4858]: I0218 00:55:18.743630 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.743601509 podStartE2EDuration="2.743601509s" podCreationTimestamp="2026-02-18 00:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:55:18.739484729 +0000 UTC m=+1272.045321471" watchObservedRunningTime="2026-02-18 00:55:18.743601509 +0000 UTC m=+1272.049438281" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.195841 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.744295 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-9xtk4"] Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.745758 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.748235 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.748477 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.776376 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9xtk4"] Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.888010 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-config-data\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.888065 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9qf\" (UniqueName: \"kubernetes.io/projected/570680e8-0b24-4814-a4ea-7f70e5ed1622-kube-api-access-tv9qf\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.888170 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-scripts\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.888354 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.919335 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.920872 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.931809 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.943647 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.979455 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.981008 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.984600 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.991607 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-config-data\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.991665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv9qf\" (UniqueName: \"kubernetes.io/projected/570680e8-0b24-4814-a4ea-7f70e5ed1622-kube-api-access-tv9qf\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.991711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-scripts\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:22 crc kubenswrapper[4858]: I0218 00:55:22.991794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.002157 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-scripts\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.007627 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.008842 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.011531 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-config-data\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.036078 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv9qf\" (UniqueName: \"kubernetes.io/projected/570680e8-0b24-4814-a4ea-7f70e5ed1622-kube-api-access-tv9qf\") pod \"nova-cell0-cell-mapping-9xtk4\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.051108 4858 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.053121 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.062704 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.070966 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.081371 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.097631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59ch\" (UniqueName: \"kubernetes.io/projected/694d16d9-f59c-48f2-90a6-3994846c0ca5-kube-api-access-k59ch\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.097683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.097730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-config-data\") pod \"nova-scheduler-0\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.097762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/694d16d9-f59c-48f2-90a6-3994846c0ca5-logs\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.097803 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptfpk\" (UniqueName: \"kubernetes.io/projected/1bcc4262-82a0-46b5-bdd4-ec2465032a84-kube-api-access-ptfpk\") pod \"nova-scheduler-0\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.097859 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.097886 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-config-data\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.148926 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-metadata-0"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.150607 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.152242 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.182867 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.202626 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptfpk\" (UniqueName: \"kubernetes.io/projected/1bcc4262-82a0-46b5-bdd4-ec2465032a84-kube-api-access-ptfpk\") pod \"nova-scheduler-0\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.202708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.202739 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-config-data\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.202791 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59ch\" (UniqueName: \"kubernetes.io/projected/694d16d9-f59c-48f2-90a6-3994846c0ca5-kube-api-access-k59ch\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.202818 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.202858 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.202880 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-config-data\") pod \"nova-scheduler-0\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.202895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 
00:55:23.202925 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/694d16d9-f59c-48f2-90a6-3994846c0ca5-logs\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.202940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv9wr\" (UniqueName: \"kubernetes.io/projected/057985d0-f8a3-4924-af98-13a66b730569-kube-api-access-kv9wr\") pod \"nova-cell1-novncproxy-0\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.205057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/694d16d9-f59c-48f2-90a6-3994846c0ca5-logs\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.211716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-config-data\") pod \"nova-scheduler-0\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.214673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.222428 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.228328 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptfpk\" (UniqueName: \"kubernetes.io/projected/1bcc4262-82a0-46b5-bdd4-ec2465032a84-kube-api-access-ptfpk\") pod \"nova-scheduler-0\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.234124 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-config-data\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.234587 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59ch\" (UniqueName: \"kubernetes.io/projected/694d16d9-f59c-48f2-90a6-3994846c0ca5-kube-api-access-k59ch\") pod \"nova-api-0\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.246109 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.309817 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a04f637-4965-4374-b1de-cacc188d8d37-logs\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.309863 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.309889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-config-data\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.310089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdhgl\" (UniqueName: \"kubernetes.io/projected/9a04f637-4965-4374-b1de-cacc188d8d37-kube-api-access-qdhgl\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.310148 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.310176 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.310219 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv9wr\" (UniqueName: \"kubernetes.io/projected/057985d0-f8a3-4924-af98-13a66b730569-kube-api-access-kv9wr\") pod \"nova-cell1-novncproxy-0\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.322436 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-bnkqb"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.324247 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.330581 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv9wr\" (UniqueName: \"kubernetes.io/projected/057985d0-f8a3-4924-af98-13a66b730569-kube-api-access-kv9wr\") pod \"nova-cell1-novncproxy-0\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.332238 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-bnkqb"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.333314 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.340894 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414036 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdhgl\" (UniqueName: \"kubernetes.io/projected/9a04f637-4965-4374-b1de-cacc188d8d37-kube-api-access-qdhgl\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414097 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414126 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-config\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414151 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414178 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bl8w\" (UniqueName: \"kubernetes.io/projected/69e75e4d-3e34-492b-8be5-15be3867f605-kube-api-access-7bl8w\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9a04f637-4965-4374-b1de-cacc188d8d37-logs\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414270 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414288 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-config-data\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414308 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414316 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414332 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-svc\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.414992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a04f637-4965-4374-b1de-cacc188d8d37-logs\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.422819 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.423711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-config-data\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.428938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdhgl\" (UniqueName: \"kubernetes.io/projected/9a04f637-4965-4374-b1de-cacc188d8d37-kube-api-access-qdhgl\") pod \"nova-metadata-0\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.516512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.516564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-svc\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.516745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.516774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-config\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.516797 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.516825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bl8w\" (UniqueName: \"kubernetes.io/projected/69e75e4d-3e34-492b-8be5-15be3867f605-kube-api-access-7bl8w\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.518181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.518191 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.518266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.518773 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-config\") pod 
\"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.518843 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-svc\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.538444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bl8w\" (UniqueName: \"kubernetes.io/projected/69e75e4d-3e34-492b-8be5-15be3867f605-kube-api-access-7bl8w\") pod \"dnsmasq-dns-78cd565959-bnkqb\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.594027 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.625985 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.671036 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.778585 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9xtk4"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.926588 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.967075 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxsd9"] Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.968449 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.970380 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.970610 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 00:55:23 crc kubenswrapper[4858]: I0218 00:55:23.977447 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxsd9"] Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.032985 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.033186 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-config-data\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.033209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcb6x\" (UniqueName: \"kubernetes.io/projected/0634c49e-271a-4c92-8313-d974f58cd273-kube-api-access-qcb6x\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.033249 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-scripts\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.051609 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:55:24 crc kubenswrapper[4858]: W0218 00:55:24.054513 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bcc4262_82a0_46b5_bdd4_ec2465032a84.slice/crio-b8f64447e675a6d1615cf72880b7519aa45fab294b23e79a7849d6bb5d3fbd53 WatchSource:0}: Error finding container b8f64447e675a6d1615cf72880b7519aa45fab294b23e79a7849d6bb5d3fbd53: Status 404 returned error can't find the container with id b8f64447e675a6d1615cf72880b7519aa45fab294b23e79a7849d6bb5d3fbd53 Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.135541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.135643 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-config-data\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.135663 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcb6x\" (UniqueName: \"kubernetes.io/projected/0634c49e-271a-4c92-8313-d974f58cd273-kube-api-access-qcb6x\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.135694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-scripts\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.141115 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-scripts\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.141461 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-config-data\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.142060 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.153061 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcb6x\" (UniqueName: \"kubernetes.io/projected/0634c49e-271a-4c92-8313-d974f58cd273-kube-api-access-qcb6x\") pod \"nova-cell1-conductor-db-sync-qxsd9\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.170927 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.300480 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.335764 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-bnkqb"] Feb 18 00:55:24 crc kubenswrapper[4858]: W0218 00:55:24.355165 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69e75e4d_3e34_492b_8be5_15be3867f605.slice/crio-21830b11e2a4f56e4d17bf9ae1a88fd39e7e53f3b3de25271b72a12902db45f2 WatchSource:0}: Error finding container 21830b11e2a4f56e4d17bf9ae1a88fd39e7e53f3b3de25271b72a12902db45f2: Status 404 returned error can't find the container with id 21830b11e2a4f56e4d17bf9ae1a88fd39e7e53f3b3de25271b72a12902db45f2 Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.378722 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.800004 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" event={"ID":"69e75e4d-3e34-492b-8be5-15be3867f605","Type":"ContainerStarted","Data":"21830b11e2a4f56e4d17bf9ae1a88fd39e7e53f3b3de25271b72a12902db45f2"} Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.810309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9xtk4" event={"ID":"570680e8-0b24-4814-a4ea-7f70e5ed1622","Type":"ContainerStarted","Data":"e6b137dfe882a81d6e61324362a90a5575dfd56132d45fe40921a53ddb6d76ce"} Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.810386 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9xtk4" event={"ID":"570680e8-0b24-4814-a4ea-7f70e5ed1622","Type":"ContainerStarted","Data":"d0b6ce363700e3fdcd7bc3c1eb9e160891b5888440f7b0a4908da354d4e4f475"} Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.814145 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bcc4262-82a0-46b5-bdd4-ec2465032a84","Type":"ContainerStarted","Data":"b8f64447e675a6d1615cf72880b7519aa45fab294b23e79a7849d6bb5d3fbd53"} Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.826297 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a04f637-4965-4374-b1de-cacc188d8d37","Type":"ContainerStarted","Data":"90fc022df4f1ebe06c19e856b160b7c4b0b7a5b9da2833f07073d6869914fbe7"} Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.830742 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"057985d0-f8a3-4924-af98-13a66b730569","Type":"ContainerStarted","Data":"5e086b6fe196aa712afb3ac4dd1ea71df396631436819b06befa820583d13297"} Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.833042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"694d16d9-f59c-48f2-90a6-3994846c0ca5","Type":"ContainerStarted","Data":"9e1c22135b1c9e66852da82704745a10b290f9667e50845fdc7a55218b402587"} Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.840406 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-9xtk4" podStartSLOduration=2.840388155 podStartE2EDuration="2.840388155s" podCreationTimestamp="2026-02-18 00:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:55:24.828145167 +0000 
UTC m=+1278.133981899" watchObservedRunningTime="2026-02-18 00:55:24.840388155 +0000 UTC m=+1278.146224887" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.849894 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 00:55:24 crc kubenswrapper[4858]: I0218 00:55:24.907381 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxsd9"] Feb 18 00:55:24 crc kubenswrapper[4858]: W0218 00:55:24.949501 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0634c49e_271a_4c92_8313_d974f58cd273.slice/crio-8e637a3d93d86b92034e8d85ebbbe7e52fa6414e07f750e43905f143346cd333 WatchSource:0}: Error finding container 8e637a3d93d86b92034e8d85ebbbe7e52fa6414e07f750e43905f143346cd333: Status 404 returned error can't find the container with id 8e637a3d93d86b92034e8d85ebbbe7e52fa6414e07f750e43905f143346cd333 Feb 18 00:55:25 crc kubenswrapper[4858]: I0218 00:55:25.868379 4858 generic.go:334] "Generic (PLEG): container finished" podID="69e75e4d-3e34-492b-8be5-15be3867f605" containerID="30a2093867d200f77b2d6a55a663d0b6a05ad6cf73861e98b7913f250b810aa1" exitCode=0 Feb 18 00:55:25 crc kubenswrapper[4858]: I0218 00:55:25.868716 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" event={"ID":"69e75e4d-3e34-492b-8be5-15be3867f605","Type":"ContainerDied","Data":"30a2093867d200f77b2d6a55a663d0b6a05ad6cf73861e98b7913f250b810aa1"} Feb 18 00:55:25 crc kubenswrapper[4858]: I0218 00:55:25.872040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qxsd9" event={"ID":"0634c49e-271a-4c92-8313-d974f58cd273","Type":"ContainerStarted","Data":"5dd29d6ab0f5291c6b919cd6b75c1064b0972c5f019703afcf6f4f1952ee5c1a"} Feb 18 00:55:25 crc kubenswrapper[4858]: I0218 00:55:25.872081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qxsd9" event={"ID":"0634c49e-271a-4c92-8313-d974f58cd273","Type":"ContainerStarted","Data":"8e637a3d93d86b92034e8d85ebbbe7e52fa6414e07f750e43905f143346cd333"} Feb 18 00:55:25 crc kubenswrapper[4858]: I0218 00:55:25.907692 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-qxsd9" podStartSLOduration=2.90767372 podStartE2EDuration="2.90767372s" podCreationTimestamp="2026-02-18 00:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:55:25.902734391 +0000 UTC m=+1279.208571143" watchObservedRunningTime="2026-02-18 00:55:25.90767372 +0000 UTC m=+1279.213510452" Feb 18 00:55:26 crc kubenswrapper[4858]: I0218 00:55:26.851245 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:26 crc kubenswrapper[4858]: I0218 00:55:26.863268 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.923686 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bcc4262-82a0-46b5-bdd4-ec2465032a84","Type":"ContainerStarted","Data":"edb5a4994c765341d3fcebd23803137cc098ffe7a4f710f043c85d9d1841d0eb"} Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.933963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"9a04f637-4965-4374-b1de-cacc188d8d37","Type":"ContainerStarted","Data":"4ee1dff0932c45668be6490e23f78bbd23135f18692c4444d73989fe070ea33d"} Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.934229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a04f637-4965-4374-b1de-cacc188d8d37","Type":"ContainerStarted","Data":"a8a34f2bb958f07d799f2ea3ecddf1a9862b2c1d53e6174de283c1a87bda86ea"} Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.934426 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9a04f637-4965-4374-b1de-cacc188d8d37" containerName="nova-metadata-log" containerID="cri-o://a8a34f2bb958f07d799f2ea3ecddf1a9862b2c1d53e6174de283c1a87bda86ea" gracePeriod=30 Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.934774 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9a04f637-4965-4374-b1de-cacc188d8d37" containerName="nova-metadata-metadata" containerID="cri-o://4ee1dff0932c45668be6490e23f78bbd23135f18692c4444d73989fe070ea33d" gracePeriod=30 Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.944909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" event={"ID":"69e75e4d-3e34-492b-8be5-15be3867f605","Type":"ContainerStarted","Data":"7ba75c2834264dc9531fe9a625497684c95afe88fd4408c499b1b95a40e1b5a0"} Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.945956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.953534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"057985d0-f8a3-4924-af98-13a66b730569","Type":"ContainerStarted","Data":"bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4"} Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.953670 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="057985d0-f8a3-4924-af98-13a66b730569" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4" gracePeriod=30 Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.958077 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"694d16d9-f59c-48f2-90a6-3994846c0ca5","Type":"ContainerStarted","Data":"2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2"} Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.958109 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"694d16d9-f59c-48f2-90a6-3994846c0ca5","Type":"ContainerStarted","Data":"0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c"} Feb 18 00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.961152 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.133765361 podStartE2EDuration="6.961138882s" podCreationTimestamp="2026-02-18 00:55:22 +0000 UTC" firstStartedPulling="2026-02-18 00:55:24.060314456 +0000 UTC m=+1277.366151188" lastFinishedPulling="2026-02-18 00:55:27.887687957 +0000 UTC m=+1281.193524709" observedRunningTime="2026-02-18 00:55:28.952775449 +0000 UTC m=+1282.258612181" watchObservedRunningTime="2026-02-18 00:55:28.961138882 +0000 UTC m=+1282.266975614" Feb 18 
00:55:28 crc kubenswrapper[4858]: I0218 00:55:28.982710 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" podStartSLOduration=5.982691498 podStartE2EDuration="5.982691498s" podCreationTimestamp="2026-02-18 00:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:55:28.97253282 +0000 UTC m=+1282.278369552" watchObservedRunningTime="2026-02-18 00:55:28.982691498 +0000 UTC m=+1282.288528230" Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.007421 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5128573039999997 podStartE2EDuration="6.007402589s" podCreationTimestamp="2026-02-18 00:55:23 +0000 UTC" firstStartedPulling="2026-02-18 00:55:24.364780051 +0000 UTC m=+1277.670616783" lastFinishedPulling="2026-02-18 00:55:27.859325336 +0000 UTC m=+1281.165162068" observedRunningTime="2026-02-18 00:55:28.987087465 +0000 UTC m=+1282.292924197" watchObservedRunningTime="2026-02-18 00:55:29.007402589 +0000 UTC m=+1282.313239321" Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.029374 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.341165353 podStartE2EDuration="6.029352774s" podCreationTimestamp="2026-02-18 00:55:23 +0000 UTC" firstStartedPulling="2026-02-18 00:55:24.175485131 +0000 UTC m=+1277.481321863" lastFinishedPulling="2026-02-18 00:55:27.863672552 +0000 UTC m=+1281.169509284" observedRunningTime="2026-02-18 00:55:29.005525054 +0000 UTC m=+1282.311361786" watchObservedRunningTime="2026-02-18 00:55:29.029352774 +0000 UTC m=+1282.335189506" Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.035667 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.105758249 podStartE2EDuration="7.035653507s" podCreationTimestamp="2026-02-18 00:55:22 +0000 UTC" firstStartedPulling="2026-02-18 00:55:23.929410147 +0000 UTC m=+1277.235246879" lastFinishedPulling="2026-02-18 00:55:27.859305405 +0000 UTC m=+1281.165142137" observedRunningTime="2026-02-18 00:55:29.024331871 +0000 UTC m=+1282.330168603" watchObservedRunningTime="2026-02-18 00:55:29.035653507 +0000 UTC m=+1282.341490239" Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.506283 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.506725 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9788397b-0bb7-43f9-9ac8-69b765750ecb" containerName="kube-state-metrics" containerID="cri-o://6247c2457528a5c07016e1c2d7a5d682e922d871495ccad36689af7e292de274" gracePeriod=30 Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.977712 4858 generic.go:334] "Generic (PLEG): container finished" podID="9788397b-0bb7-43f9-9ac8-69b765750ecb" containerID="6247c2457528a5c07016e1c2d7a5d682e922d871495ccad36689af7e292de274" exitCode=2 Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.978156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9788397b-0bb7-43f9-9ac8-69b765750ecb","Type":"ContainerDied","Data":"6247c2457528a5c07016e1c2d7a5d682e922d871495ccad36689af7e292de274"} Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.984708 4858 
generic.go:334] "Generic (PLEG): container finished" podID="9a04f637-4965-4374-b1de-cacc188d8d37" containerID="4ee1dff0932c45668be6490e23f78bbd23135f18692c4444d73989fe070ea33d" exitCode=0 Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.984730 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a04f637-4965-4374-b1de-cacc188d8d37" containerID="a8a34f2bb958f07d799f2ea3ecddf1a9862b2c1d53e6174de283c1a87bda86ea" exitCode=143 Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.985773 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a04f637-4965-4374-b1de-cacc188d8d37","Type":"ContainerDied","Data":"4ee1dff0932c45668be6490e23f78bbd23135f18692c4444d73989fe070ea33d"} Feb 18 00:55:29 crc kubenswrapper[4858]: I0218 00:55:29.985829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a04f637-4965-4374-b1de-cacc188d8d37","Type":"ContainerDied","Data":"a8a34f2bb958f07d799f2ea3ecddf1a9862b2c1d53e6174de283c1a87bda86ea"} Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.207908 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.212969 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.304791 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a04f637-4965-4374-b1de-cacc188d8d37-logs\") pod \"9a04f637-4965-4374-b1de-cacc188d8d37\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.304866 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jsbl\" (UniqueName: \"kubernetes.io/projected/9788397b-0bb7-43f9-9ac8-69b765750ecb-kube-api-access-9jsbl\") pod \"9788397b-0bb7-43f9-9ac8-69b765750ecb\" (UID: \"9788397b-0bb7-43f9-9ac8-69b765750ecb\") " Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.304952 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdhgl\" (UniqueName: \"kubernetes.io/projected/9a04f637-4965-4374-b1de-cacc188d8d37-kube-api-access-qdhgl\") pod \"9a04f637-4965-4374-b1de-cacc188d8d37\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.305065 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-config-data\") pod \"9a04f637-4965-4374-b1de-cacc188d8d37\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.305089 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-combined-ca-bundle\") pod \"9a04f637-4965-4374-b1de-cacc188d8d37\" (UID: \"9a04f637-4965-4374-b1de-cacc188d8d37\") " Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.306061 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a04f637-4965-4374-b1de-cacc188d8d37-logs" (OuterVolumeSpecName: "logs") pod "9a04f637-4965-4374-b1de-cacc188d8d37" (UID: "9a04f637-4965-4374-b1de-cacc188d8d37"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.342703 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a04f637-4965-4374-b1de-cacc188d8d37-kube-api-access-qdhgl" (OuterVolumeSpecName: "kube-api-access-qdhgl") pod "9a04f637-4965-4374-b1de-cacc188d8d37" (UID: "9a04f637-4965-4374-b1de-cacc188d8d37"). InnerVolumeSpecName "kube-api-access-qdhgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.358786 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9788397b-0bb7-43f9-9ac8-69b765750ecb-kube-api-access-9jsbl" (OuterVolumeSpecName: "kube-api-access-9jsbl") pod "9788397b-0bb7-43f9-9ac8-69b765750ecb" (UID: "9788397b-0bb7-43f9-9ac8-69b765750ecb"). InnerVolumeSpecName "kube-api-access-9jsbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.392648 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a04f637-4965-4374-b1de-cacc188d8d37" (UID: "9a04f637-4965-4374-b1de-cacc188d8d37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.408799 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.408832 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a04f637-4965-4374-b1de-cacc188d8d37-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.408842 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jsbl\" (UniqueName: \"kubernetes.io/projected/9788397b-0bb7-43f9-9ac8-69b765750ecb-kube-api-access-9jsbl\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.408851 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdhgl\" (UniqueName: \"kubernetes.io/projected/9a04f637-4965-4374-b1de-cacc188d8d37-kube-api-access-qdhgl\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.447730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-config-data" (OuterVolumeSpecName: "config-data") pod "9a04f637-4965-4374-b1de-cacc188d8d37" (UID: "9a04f637-4965-4374-b1de-cacc188d8d37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:30 crc kubenswrapper[4858]: I0218 00:55:30.510932 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a04f637-4965-4374-b1de-cacc188d8d37-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.002881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9788397b-0bb7-43f9-9ac8-69b765750ecb","Type":"ContainerDied","Data":"d4e22b2b7c18ba4c75ee11324ba5c7879101602bb721e9f5336bd3bd24e1663c"} Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.003301 4858 scope.go:117] "RemoveContainer" containerID="6247c2457528a5c07016e1c2d7a5d682e922d871495ccad36689af7e292de274" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.002993 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.012487 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.014568 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9a04f637-4965-4374-b1de-cacc188d8d37","Type":"ContainerDied","Data":"90fc022df4f1ebe06c19e856b160b7c4b0b7a5b9da2833f07073d6869914fbe7"} Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.068190 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.076608 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.079388 4858 scope.go:117] "RemoveContainer" containerID="4ee1dff0932c45668be6490e23f78bbd23135f18692c4444d73989fe070ea33d" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.089303 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.098027 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.128798 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:55:31 crc kubenswrapper[4858]: E0218 00:55:31.129272 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a04f637-4965-4374-b1de-cacc188d8d37" containerName="nova-metadata-metadata" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.129291 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a04f637-4965-4374-b1de-cacc188d8d37" containerName="nova-metadata-metadata" Feb 18 00:55:31 crc kubenswrapper[4858]: E0218 00:55:31.129304 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a04f637-4965-4374-b1de-cacc188d8d37" containerName="nova-metadata-log" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.129310 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a04f637-4965-4374-b1de-cacc188d8d37" containerName="nova-metadata-log" Feb 18 00:55:31 crc kubenswrapper[4858]: E0218 00:55:31.129322 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9788397b-0bb7-43f9-9ac8-69b765750ecb" containerName="kube-state-metrics" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.129329 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9788397b-0bb7-43f9-9ac8-69b765750ecb" containerName="kube-state-metrics" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.129555 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a04f637-4965-4374-b1de-cacc188d8d37" containerName="nova-metadata-log" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.129583 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a04f637-4965-4374-b1de-cacc188d8d37" containerName="nova-metadata-metadata" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.129594 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9788397b-0bb7-43f9-9ac8-69b765750ecb" containerName="kube-state-metrics" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.130343 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.132447 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.136885 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.137187 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.137695 4858 scope.go:117] "RemoveContainer" containerID="a8a34f2bb958f07d799f2ea3ecddf1a9862b2c1d53e6174de283c1a87bda86ea" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.177559 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.179312 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.182174 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.182603 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.196156 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.225706 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfnnz\" (UniqueName: \"kubernetes.io/projected/d212b736-c8c8-43a3-923d-098fe3a06a6b-kube-api-access-dfnnz\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.225793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d212b736-c8c8-43a3-923d-098fe3a06a6b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.225823 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d212b736-c8c8-43a3-923d-098fe3a06a6b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.225892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d212b736-c8c8-43a3-923d-098fe3a06a6b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.327323 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d212b736-c8c8-43a3-923d-098fe3a06a6b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.327487 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.327545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfnnz\" (UniqueName: \"kubernetes.io/projected/d212b736-c8c8-43a3-923d-098fe3a06a6b-kube-api-access-dfnnz\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.327616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.327642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d212b736-c8c8-43a3-923d-098fe3a06a6b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.327662 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-config-data\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.327685 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d212b736-c8c8-43a3-923d-098fe3a06a6b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.327717 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttmbn\" (UniqueName: \"kubernetes.io/projected/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-kube-api-access-ttmbn\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.327743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-logs\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.332115 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d212b736-c8c8-43a3-923d-098fe3a06a6b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.339167 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d212b736-c8c8-43a3-923d-098fe3a06a6b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.340417 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d212b736-c8c8-43a3-923d-098fe3a06a6b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.355180 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfnnz\" (UniqueName: \"kubernetes.io/projected/d212b736-c8c8-43a3-923d-098fe3a06a6b-kube-api-access-dfnnz\") pod 
\"kube-state-metrics-0\" (UID: \"d212b736-c8c8-43a3-923d-098fe3a06a6b\") " pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.430119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.430189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.430216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-config-data\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.430256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmbn\" (UniqueName: \"kubernetes.io/projected/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-kube-api-access-ttmbn\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.430278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-logs\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.430718 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-logs\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.432683 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9788397b-0bb7-43f9-9ac8-69b765750ecb" path="/var/lib/kubelet/pods/9788397b-0bb7-43f9-9ac8-69b765750ecb/volumes" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.433264 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.434689 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a04f637-4965-4374-b1de-cacc188d8d37" path="/var/lib/kubelet/pods/9a04f637-4965-4374-b1de-cacc188d8d37/volumes" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.435773 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 
00:55:31.435991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-config-data\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.449426 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmbn\" (UniqueName: \"kubernetes.io/projected/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-kube-api-access-ttmbn\") pod \"nova-metadata-0\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.449988 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.508403 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:31 crc kubenswrapper[4858]: I0218 00:55:31.932035 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:55:32 crc kubenswrapper[4858]: I0218 00:55:32.039958 4858 generic.go:334] "Generic (PLEG): container finished" podID="570680e8-0b24-4814-a4ea-7f70e5ed1622" containerID="e6b137dfe882a81d6e61324362a90a5575dfd56132d45fe40921a53ddb6d76ce" exitCode=0 Feb 18 00:55:32 crc kubenswrapper[4858]: I0218 00:55:32.040069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9xtk4" event={"ID":"570680e8-0b24-4814-a4ea-7f70e5ed1622","Type":"ContainerDied","Data":"e6b137dfe882a81d6e61324362a90a5575dfd56132d45fe40921a53ddb6d76ce"} Feb 18 00:55:32 crc kubenswrapper[4858]: I0218 00:55:32.042478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d212b736-c8c8-43a3-923d-098fe3a06a6b","Type":"ContainerStarted","Data":"6f7352d109597d810a288655ad88c7b8c1b42d545a77ec13cf28f4173b762fb0"} Feb 18 00:55:32 crc kubenswrapper[4858]: I0218 00:55:32.045958 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:32 crc kubenswrapper[4858]: W0218 00:55:32.053743 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode946aa8f_18ab_49c1_9f17_59d5264e7c9d.slice/crio-514f50c867a47a0511668a3256efb03f5f2397526aba48c86db0db03d7d96a7a WatchSource:0}: Error finding container 514f50c867a47a0511668a3256efb03f5f2397526aba48c86db0db03d7d96a7a: Status 404 returned error can't find the container with id 514f50c867a47a0511668a3256efb03f5f2397526aba48c86db0db03d7d96a7a Feb 18 00:55:32 crc kubenswrapper[4858]: I0218 00:55:32.132784 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:55:32 crc kubenswrapper[4858]: I0218 00:55:32.133176 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="proxy-httpd" containerID="cri-o://37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8" gracePeriod=30 Feb 18 00:55:32 crc kubenswrapper[4858]: I0218 00:55:32.133227 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="sg-core" containerID="cri-o://b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659" 
gracePeriod=30 Feb 18 00:55:32 crc kubenswrapper[4858]: I0218 00:55:32.133230 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="ceilometer-notification-agent" containerID="cri-o://958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4" gracePeriod=30 Feb 18 00:55:32 crc kubenswrapper[4858]: I0218 00:55:32.133458 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="ceilometer-central-agent" containerID="cri-o://cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934" gracePeriod=30 Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.055242 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d212b736-c8c8-43a3-923d-098fe3a06a6b","Type":"ContainerStarted","Data":"91f304523ae71e5b7056a63cef3f299aecb87a7c41be011b5b426d2ae41bb9b7"} Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.055710 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.058750 4858 generic.go:334] "Generic (PLEG): container finished" podID="933ab558-a1b3-4850-881b-54a3a69d7320" containerID="37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8" exitCode=0 Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.058772 4858 generic.go:334] "Generic (PLEG): container finished" podID="933ab558-a1b3-4850-881b-54a3a69d7320" containerID="b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659" exitCode=2 Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.058780 4858 generic.go:334] "Generic (PLEG): container finished" podID="933ab558-a1b3-4850-881b-54a3a69d7320" containerID="cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934" exitCode=0 Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.058820 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerDied","Data":"37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8"} Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.058842 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerDied","Data":"b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659"} Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.058853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerDied","Data":"cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934"} Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.060416 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e946aa8f-18ab-49c1-9f17-59d5264e7c9d","Type":"ContainerStarted","Data":"19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074"} Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.060561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e946aa8f-18ab-49c1-9f17-59d5264e7c9d","Type":"ContainerStarted","Data":"a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03"} Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.060643 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e946aa8f-18ab-49c1-9f17-59d5264e7c9d","Type":"ContainerStarted","Data":"514f50c867a47a0511668a3256efb03f5f2397526aba48c86db0db03d7d96a7a"} Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.086264 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.632065184 podStartE2EDuration="2.086248726s" podCreationTimestamp="2026-02-18 00:55:31 +0000 UTC" firstStartedPulling="2026-02-18 00:55:31.929936083 +0000 UTC m=+1285.235772815" lastFinishedPulling="2026-02-18 00:55:32.384119625 +0000 UTC m=+1285.689956357" observedRunningTime="2026-02-18 00:55:33.078786184 +0000 UTC m=+1286.384622916" watchObservedRunningTime="2026-02-18 00:55:33.086248726 +0000 UTC m=+1286.392085458" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.105097 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.105075615 podStartE2EDuration="2.105075615s" podCreationTimestamp="2026-02-18 00:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:55:33.095886141 +0000 UTC m=+1286.401722873" watchObservedRunningTime="2026-02-18 00:55:33.105075615 +0000 UTC m=+1286.410912347" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.247156 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.247207 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.414663 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.415348 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.451422 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.595477 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.672998 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.706353 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.733637 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mb9mk"] Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.733993 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" podUID="274393d7-4826-441f-b03e-496f8b30d14f" containerName="dnsmasq-dns" containerID="cri-o://d58a3ee463deacbe350aade08b64d36ca9b81bacf2992b00b8188d8ccd14ae40" gracePeriod=10 Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.886970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv9qf\" (UniqueName: \"kubernetes.io/projected/570680e8-0b24-4814-a4ea-7f70e5ed1622-kube-api-access-tv9qf\") pod \"570680e8-0b24-4814-a4ea-7f70e5ed1622\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.887231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-scripts\") pod \"570680e8-0b24-4814-a4ea-7f70e5ed1622\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.887249 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-combined-ca-bundle\") pod \"570680e8-0b24-4814-a4ea-7f70e5ed1622\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.887315 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-config-data\") pod \"570680e8-0b24-4814-a4ea-7f70e5ed1622\" (UID: \"570680e8-0b24-4814-a4ea-7f70e5ed1622\") " Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.896777 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-scripts" (OuterVolumeSpecName: "scripts") pod "570680e8-0b24-4814-a4ea-7f70e5ed1622" (UID: "570680e8-0b24-4814-a4ea-7f70e5ed1622"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.897726 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570680e8-0b24-4814-a4ea-7f70e5ed1622-kube-api-access-tv9qf" (OuterVolumeSpecName: "kube-api-access-tv9qf") pod "570680e8-0b24-4814-a4ea-7f70e5ed1622" (UID: "570680e8-0b24-4814-a4ea-7f70e5ed1622"). InnerVolumeSpecName "kube-api-access-tv9qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.925006 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-config-data" (OuterVolumeSpecName: "config-data") pod "570680e8-0b24-4814-a4ea-7f70e5ed1622" (UID: "570680e8-0b24-4814-a4ea-7f70e5ed1622"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.934348 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "570680e8-0b24-4814-a4ea-7f70e5ed1622" (UID: "570680e8-0b24-4814-a4ea-7f70e5ed1622"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.989169 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.989206 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.989220 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570680e8-0b24-4814-a4ea-7f70e5ed1622-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:33 crc kubenswrapper[4858]: I0218 00:55:33.989230 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv9qf\" (UniqueName: \"kubernetes.io/projected/570680e8-0b24-4814-a4ea-7f70e5ed1622-kube-api-access-tv9qf\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.001054 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.091163 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-config-data\") pod \"933ab558-a1b3-4850-881b-54a3a69d7320\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.091214 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-run-httpd\") pod \"933ab558-a1b3-4850-881b-54a3a69d7320\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.091246 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-scripts\") pod \"933ab558-a1b3-4850-881b-54a3a69d7320\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.091316 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24884\" (UniqueName: \"kubernetes.io/projected/933ab558-a1b3-4850-881b-54a3a69d7320-kube-api-access-24884\") pod \"933ab558-a1b3-4850-881b-54a3a69d7320\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.091382 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-sg-core-conf-yaml\") pod \"933ab558-a1b3-4850-881b-54a3a69d7320\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.091410 
4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-combined-ca-bundle\") pod \"933ab558-a1b3-4850-881b-54a3a69d7320\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.091459 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-log-httpd\") pod \"933ab558-a1b3-4850-881b-54a3a69d7320\" (UID: \"933ab558-a1b3-4850-881b-54a3a69d7320\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.091665 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "933ab558-a1b3-4850-881b-54a3a69d7320" (UID: "933ab558-a1b3-4850-881b-54a3a69d7320"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.091962 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.092263 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "933ab558-a1b3-4850-881b-54a3a69d7320" (UID: "933ab558-a1b3-4850-881b-54a3a69d7320"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.099622 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-scripts" (OuterVolumeSpecName: "scripts") pod "933ab558-a1b3-4850-881b-54a3a69d7320" (UID: "933ab558-a1b3-4850-881b-54a3a69d7320"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.100867 4858 generic.go:334] "Generic (PLEG): container finished" podID="274393d7-4826-441f-b03e-496f8b30d14f" containerID="d58a3ee463deacbe350aade08b64d36ca9b81bacf2992b00b8188d8ccd14ae40" exitCode=0 Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.100951 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" event={"ID":"274393d7-4826-441f-b03e-496f8b30d14f","Type":"ContainerDied","Data":"d58a3ee463deacbe350aade08b64d36ca9b81bacf2992b00b8188d8ccd14ae40"} Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.101679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933ab558-a1b3-4850-881b-54a3a69d7320-kube-api-access-24884" (OuterVolumeSpecName: "kube-api-access-24884") pod "933ab558-a1b3-4850-881b-54a3a69d7320" (UID: "933ab558-a1b3-4850-881b-54a3a69d7320"). InnerVolumeSpecName "kube-api-access-24884". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.110740 4858 generic.go:334] "Generic (PLEG): container finished" podID="0634c49e-271a-4c92-8313-d974f58cd273" containerID="5dd29d6ab0f5291c6b919cd6b75c1064b0972c5f019703afcf6f4f1952ee5c1a" exitCode=0 Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.110826 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qxsd9" event={"ID":"0634c49e-271a-4c92-8313-d974f58cd273","Type":"ContainerDied","Data":"5dd29d6ab0f5291c6b919cd6b75c1064b0972c5f019703afcf6f4f1952ee5c1a"} Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.123674 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9xtk4" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.124407 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9xtk4" event={"ID":"570680e8-0b24-4814-a4ea-7f70e5ed1622","Type":"ContainerDied","Data":"d0b6ce363700e3fdcd7bc3c1eb9e160891b5888440f7b0a4908da354d4e4f475"} Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.124437 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b6ce363700e3fdcd7bc3c1eb9e160891b5888440f7b0a4908da354d4e4f475" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.190233 4858 generic.go:334] "Generic (PLEG): container finished" podID="933ab558-a1b3-4850-881b-54a3a69d7320" containerID="958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4" exitCode=0 Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.191550 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.192153 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerDied","Data":"958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4"} Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.192183 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"933ab558-a1b3-4850-881b-54a3a69d7320","Type":"ContainerDied","Data":"41ae1df0fdb6ae7e5c28e2dde3cc15f262bca9394db82a2dbb05d25e9234c674"} Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.192202 4858 scope.go:117] "RemoveContainer" containerID="37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.196182 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.196204 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24884\" (UniqueName: \"kubernetes.io/projected/933ab558-a1b3-4850-881b-54a3a69d7320-kube-api-access-24884\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.196216 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/933ab558-a1b3-4850-881b-54a3a69d7320-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.205056 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.205190 
4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "933ab558-a1b3-4850-881b-54a3a69d7320" (UID: "933ab558-a1b3-4850-881b-54a3a69d7320"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.205219 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-log" containerID="cri-o://0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c" gracePeriod=30 Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.206182 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-api" containerID="cri-o://2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2" gracePeriod=30 Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.217906 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": EOF" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.218085 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": EOF" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.259264 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "933ab558-a1b3-4850-881b-54a3a69d7320" (UID: "933ab558-a1b3-4850-881b-54a3a69d7320"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.273655 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.275156 4858 scope.go:117] "RemoveContainer" containerID="b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.310195 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.310228 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.318947 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.329441 4858 scope.go:117] "RemoveContainer" containerID="958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.331331 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.366650 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-config-data" (OuterVolumeSpecName: "config-data") pod "933ab558-a1b3-4850-881b-54a3a69d7320" (UID: "933ab558-a1b3-4850-881b-54a3a69d7320"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.369694 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.376024 4858 scope.go:117] "RemoveContainer" containerID="cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.417829 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933ab558-a1b3-4850-881b-54a3a69d7320-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.417950 4858 scope.go:117] "RemoveContainer" containerID="37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.423764 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8\": container with ID starting with 37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8 not found: ID does not exist" containerID="37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.423804 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8"} err="failed to get container status \"37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8\": rpc error: code = NotFound desc = could not find container \"37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8\": container with ID starting with 37641988932c91c256173c2c3a73abc13f806d720f3c5bf7b130ee5861292ce8 not found: ID does not exist" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.423829 4858 scope.go:117] "RemoveContainer" containerID="b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.425836 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659\": container with ID starting with b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659 not found: ID does not exist" containerID="b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.425881 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659"} err="failed to get container status \"b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659\": rpc error: code = NotFound desc = could not find container \"b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659\": container with ID starting with b092bd32c7d61bb65bee43a332b7edb27092685bb74af180c97dfaec05480659 not found: ID does not exist" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.425906 4858 scope.go:117] "RemoveContainer" containerID="958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.429721 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4\": container with ID starting with 958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4 not found: ID does not exist" 
containerID="958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.429789 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4"} err="failed to get container status \"958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4\": rpc error: code = NotFound desc = could not find container \"958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4\": container with ID starting with 958cecfc10bc441162b2670beb01b3f29f70a984371cf361ac3973f7933cf3a4 not found: ID does not exist" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.429820 4858 scope.go:117] "RemoveContainer" containerID="cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.430166 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934\": container with ID starting with cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934 not found: ID does not exist" containerID="cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.430187 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934"} err="failed to get container status \"cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934\": rpc error: code = NotFound desc = could not find container \"cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934\": container with ID starting with cad9dfd7299e69b45e4a2be38b0d1b817d7f66fd205b1f209679563e106f6934 not found: ID does not exist" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.519817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-sb\") pod \"274393d7-4826-441f-b03e-496f8b30d14f\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.519944 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5qnw\" (UniqueName: \"kubernetes.io/projected/274393d7-4826-441f-b03e-496f8b30d14f-kube-api-access-k5qnw\") pod \"274393d7-4826-441f-b03e-496f8b30d14f\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.520086 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-config\") pod \"274393d7-4826-441f-b03e-496f8b30d14f\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.520116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-nb\") pod \"274393d7-4826-441f-b03e-496f8b30d14f\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.520257 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-svc\") pod \"274393d7-4826-441f-b03e-496f8b30d14f\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.520429 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-swift-storage-0\") pod \"274393d7-4826-441f-b03e-496f8b30d14f\" (UID: \"274393d7-4826-441f-b03e-496f8b30d14f\") " Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.528072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274393d7-4826-441f-b03e-496f8b30d14f-kube-api-access-k5qnw" (OuterVolumeSpecName: "kube-api-access-k5qnw") pod "274393d7-4826-441f-b03e-496f8b30d14f" (UID: "274393d7-4826-441f-b03e-496f8b30d14f"). InnerVolumeSpecName "kube-api-access-k5qnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.598106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "274393d7-4826-441f-b03e-496f8b30d14f" (UID: "274393d7-4826-441f-b03e-496f8b30d14f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.617025 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-config" (OuterVolumeSpecName: "config") pod "274393d7-4826-441f-b03e-496f8b30d14f" (UID: "274393d7-4826-441f-b03e-496f8b30d14f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.622930 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "274393d7-4826-441f-b03e-496f8b30d14f" (UID: "274393d7-4826-441f-b03e-496f8b30d14f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.623280 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "274393d7-4826-441f-b03e-496f8b30d14f" (UID: "274393d7-4826-441f-b03e-496f8b30d14f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.624593 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.624625 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5qnw\" (UniqueName: \"kubernetes.io/projected/274393d7-4826-441f-b03e-496f8b30d14f-kube-api-access-k5qnw\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.624637 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.624647 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.624659 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.631410 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "274393d7-4826-441f-b03e-496f8b30d14f" (UID: "274393d7-4826-441f-b03e-496f8b30d14f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.716597 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.726081 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/274393d7-4826-441f-b03e-496f8b30d14f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.734751 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.750530 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.750907 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="sg-core" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.750924 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="sg-core" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.750936 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="ceilometer-central-agent" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.750942 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="ceilometer-central-agent" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.750954 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274393d7-4826-441f-b03e-496f8b30d14f" containerName="init" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.750961 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="274393d7-4826-441f-b03e-496f8b30d14f" containerName="init" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.750971 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570680e8-0b24-4814-a4ea-7f70e5ed1622" containerName="nova-manage" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.750977 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="570680e8-0b24-4814-a4ea-7f70e5ed1622" containerName="nova-manage" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.750996 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274393d7-4826-441f-b03e-496f8b30d14f" containerName="dnsmasq-dns" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.751001 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="274393d7-4826-441f-b03e-496f8b30d14f" containerName="dnsmasq-dns" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.751024 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="ceilometer-notification-agent" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.751031 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="ceilometer-notification-agent" Feb 18 00:55:34 crc kubenswrapper[4858]: E0218 00:55:34.751058 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="proxy-httpd" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.751064 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="proxy-httpd" Feb 18 00:55:34 
crc kubenswrapper[4858]: I0218 00:55:34.751231 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="570680e8-0b24-4814-a4ea-7f70e5ed1622" containerName="nova-manage" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.751247 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="274393d7-4826-441f-b03e-496f8b30d14f" containerName="dnsmasq-dns" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.751256 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="ceilometer-central-agent" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.751272 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="sg-core" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.751280 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="ceilometer-notification-agent" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.751291 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" containerName="proxy-httpd" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.752979 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.757739 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.758018 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.758030 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.771017 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.929662 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-config-data\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.929730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-log-httpd\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.929800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-run-httpd\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.929827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-scripts\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.929855 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.930015 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.930221 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:34 crc kubenswrapper[4858]: I0218 00:55:34.930277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz82q\" (UniqueName: \"kubernetes.io/projected/059e53fe-a613-4e7e-99aa-81c491f09a5a-kube-api-access-dz82q\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-run-httpd\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-scripts\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032266 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032372 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz82q\" (UniqueName: \"kubernetes.io/projected/059e53fe-a613-4e7e-99aa-81c491f09a5a-kube-api-access-dz82q\") pod \"ceilometer-0\" (UID: 
\"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-config-data\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-log-httpd\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032749 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-run-httpd\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.032817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-log-httpd\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.037091 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.038735 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-scripts\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.041090 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-config-data\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.041751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.047924 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.056839 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz82q\" (UniqueName: \"kubernetes.io/projected/059e53fe-a613-4e7e-99aa-81c491f09a5a-kube-api-access-dz82q\") pod \"ceilometer-0\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " pod="openstack/ceilometer-0" Feb 18 
00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.067319 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.215325 4858 generic.go:334] "Generic (PLEG): container finished" podID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerID="0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c" exitCode=143 Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.215395 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"694d16d9-f59c-48f2-90a6-3994846c0ca5","Type":"ContainerDied","Data":"0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c"} Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.217857 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.217919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mb9mk" event={"ID":"274393d7-4826-441f-b03e-496f8b30d14f","Type":"ContainerDied","Data":"845c8b77f357f3da39ea9b97bf6f6b03262fd80bf6de7a1ea273df631887be5a"} Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.217943 4858 scope.go:117] "RemoveContainer" containerID="d58a3ee463deacbe350aade08b64d36ca9b81bacf2992b00b8188d8ccd14ae40" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.218626 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerName="nova-metadata-log" containerID="cri-o://a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03" gracePeriod=30 Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.218721 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerName="nova-metadata-metadata" containerID="cri-o://19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074" gracePeriod=30 Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.266230 4858 scope.go:117] "RemoveContainer" containerID="fb85683809997cc252d73dcfba44cda7d39488af4a553a511b5193b263ecb42f" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.284273 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mb9mk"] Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.294988 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mb9mk"] Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.454996 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="274393d7-4826-441f-b03e-496f8b30d14f" path="/var/lib/kubelet/pods/274393d7-4826-441f-b03e-496f8b30d14f/volumes" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.455909 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933ab558-a1b3-4850-881b-54a3a69d7320" path="/var/lib/kubelet/pods/933ab558-a1b3-4850-881b-54a3a69d7320/volumes" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.599534 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.820585 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.962476 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.991611 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-config-data\") pod \"0634c49e-271a-4c92-8313-d974f58cd273\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.991746 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-combined-ca-bundle\") pod \"0634c49e-271a-4c92-8313-d974f58cd273\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.991845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcb6x\" (UniqueName: \"kubernetes.io/projected/0634c49e-271a-4c92-8313-d974f58cd273-kube-api-access-qcb6x\") pod \"0634c49e-271a-4c92-8313-d974f58cd273\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " Feb 18 00:55:35 crc kubenswrapper[4858]: I0218 00:55:35.991937 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-scripts\") pod \"0634c49e-271a-4c92-8313-d974f58cd273\" (UID: \"0634c49e-271a-4c92-8313-d974f58cd273\") " Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.000644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0634c49e-271a-4c92-8313-d974f58cd273-kube-api-access-qcb6x" (OuterVolumeSpecName: "kube-api-access-qcb6x") pod "0634c49e-271a-4c92-8313-d974f58cd273" (UID: "0634c49e-271a-4c92-8313-d974f58cd273"). InnerVolumeSpecName "kube-api-access-qcb6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.000724 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-scripts" (OuterVolumeSpecName: "scripts") pod "0634c49e-271a-4c92-8313-d974f58cd273" (UID: "0634c49e-271a-4c92-8313-d974f58cd273"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.033618 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-config-data" (OuterVolumeSpecName: "config-data") pod "0634c49e-271a-4c92-8313-d974f58cd273" (UID: "0634c49e-271a-4c92-8313-d974f58cd273"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.049579 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0634c49e-271a-4c92-8313-d974f58cd273" (UID: "0634c49e-271a-4c92-8313-d974f58cd273"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.093678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-config-data\") pod \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.093720 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-combined-ca-bundle\") pod \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.093786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttmbn\" (UniqueName: \"kubernetes.io/projected/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-kube-api-access-ttmbn\") pod \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.093862 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-logs\") pod \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.093953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-nova-metadata-tls-certs\") pod \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\" (UID: \"e946aa8f-18ab-49c1-9f17-59d5264e7c9d\") " Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.094367 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcb6x\" (UniqueName: \"kubernetes.io/projected/0634c49e-271a-4c92-8313-d974f58cd273-kube-api-access-qcb6x\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.094381 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.094390 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.094398 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0634c49e-271a-4c92-8313-d974f58cd273-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.095242 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-logs" (OuterVolumeSpecName: "logs") pod "e946aa8f-18ab-49c1-9f17-59d5264e7c9d" (UID: "e946aa8f-18ab-49c1-9f17-59d5264e7c9d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.097583 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-kube-api-access-ttmbn" (OuterVolumeSpecName: "kube-api-access-ttmbn") pod "e946aa8f-18ab-49c1-9f17-59d5264e7c9d" (UID: "e946aa8f-18ab-49c1-9f17-59d5264e7c9d"). InnerVolumeSpecName "kube-api-access-ttmbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.122609 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-config-data" (OuterVolumeSpecName: "config-data") pod "e946aa8f-18ab-49c1-9f17-59d5264e7c9d" (UID: "e946aa8f-18ab-49c1-9f17-59d5264e7c9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.127289 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e946aa8f-18ab-49c1-9f17-59d5264e7c9d" (UID: "e946aa8f-18ab-49c1-9f17-59d5264e7c9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.158170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e946aa8f-18ab-49c1-9f17-59d5264e7c9d" (UID: "e946aa8f-18ab-49c1-9f17-59d5264e7c9d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.189846 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 00:55:36 crc kubenswrapper[4858]: E0218 00:55:36.190238 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerName="nova-metadata-metadata" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.190254 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerName="nova-metadata-metadata" Feb 18 00:55:36 crc kubenswrapper[4858]: E0218 00:55:36.190279 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerName="nova-metadata-log" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.190285 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerName="nova-metadata-log" Feb 18 00:55:36 crc kubenswrapper[4858]: E0218 00:55:36.190314 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0634c49e-271a-4c92-8313-d974f58cd273" containerName="nova-cell1-conductor-db-sync" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.190321 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0634c49e-271a-4c92-8313-d974f58cd273" containerName="nova-cell1-conductor-db-sync" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.190523 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerName="nova-metadata-metadata" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.190538 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0634c49e-271a-4c92-8313-d974f58cd273" containerName="nova-cell1-conductor-db-sync" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.190545 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerName="nova-metadata-log" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.192205 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.197714 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.197745 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.197756 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttmbn\" (UniqueName: \"kubernetes.io/projected/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-kube-api-access-ttmbn\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.197765 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.197775 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e946aa8f-18ab-49c1-9f17-59d5264e7c9d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.202417 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.232537 4858 generic.go:334] "Generic (PLEG): container finished" podID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerID="19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074" exitCode=0 Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.232565 4858 generic.go:334] "Generic (PLEG): container finished" podID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" containerID="a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03" exitCode=143 Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.232700 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e946aa8f-18ab-49c1-9f17-59d5264e7c9d","Type":"ContainerDied","Data":"19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074"} Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.232749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e946aa8f-18ab-49c1-9f17-59d5264e7c9d","Type":"ContainerDied","Data":"a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03"} Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.232763 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e946aa8f-18ab-49c1-9f17-59d5264e7c9d","Type":"ContainerDied","Data":"514f50c867a47a0511668a3256efb03f5f2397526aba48c86db0db03d7d96a7a"} Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.232781 4858 scope.go:117] "RemoveContainer" containerID="19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.233128 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.235923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qxsd9" event={"ID":"0634c49e-271a-4c92-8313-d974f58cd273","Type":"ContainerDied","Data":"8e637a3d93d86b92034e8d85ebbbe7e52fa6414e07f750e43905f143346cd333"} Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.235956 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e637a3d93d86b92034e8d85ebbbe7e52fa6414e07f750e43905f143346cd333" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.236006 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qxsd9" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.247483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerStarted","Data":"213a62a515466372dfc71766702a3019b7c4ea5d3ef0504b684e6d0c612b0cfc"} Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.247641 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1bcc4262-82a0-46b5-bdd4-ec2465032a84" containerName="nova-scheduler-scheduler" containerID="cri-o://edb5a4994c765341d3fcebd23803137cc098ffe7a4f710f043c85d9d1841d0eb" gracePeriod=30 Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.258992 4858 scope.go:117] "RemoveContainer" containerID="a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.291278 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.299290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f70137-27be-4f64-9778-cfca8978b247-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"46f70137-27be-4f64-9778-cfca8978b247\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.299334 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f70137-27be-4f64-9778-cfca8978b247-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"46f70137-27be-4f64-9778-cfca8978b247\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.299393 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.299553 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jp4f\" (UniqueName: \"kubernetes.io/projected/46f70137-27be-4f64-9778-cfca8978b247-kube-api-access-4jp4f\") pod \"nova-cell1-conductor-0\" (UID: \"46f70137-27be-4f64-9778-cfca8978b247\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.308276 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.313971 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.316237 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.316401 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.326824 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.333735 4858 scope.go:117] "RemoveContainer" containerID="19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074" Feb 18 00:55:36 crc kubenswrapper[4858]: E0218 00:55:36.334361 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074\": container with ID starting with 19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074 not found: ID does not exist" containerID="19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.334418 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074"} err="failed to get container status \"19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074\": rpc error: code = NotFound desc = could not find container \"19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074\": container with ID starting with 19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074 not found: ID does not exist" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.334444 4858 scope.go:117] "RemoveContainer" containerID="a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03" Feb 18 00:55:36 crc kubenswrapper[4858]: E0218 00:55:36.334906 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03\": container with ID starting with a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03 not found: ID does not exist" containerID="a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.334954 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03"} err="failed to get container status \"a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03\": rpc error: code = NotFound desc = could not find container \"a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03\": container with ID starting with a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03 not found: ID does not exist" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.334982 4858 scope.go:117] "RemoveContainer" containerID="19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.336453 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074"} err="failed to get container status \"19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074\": rpc error: 
code = NotFound desc = could not find container \"19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074\": container with ID starting with 19b8752491890ccd444012ac6cfc99ac809e17de6a6626ae232ece63dde9c074 not found: ID does not exist" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.337013 4858 scope.go:117] "RemoveContainer" containerID="a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.345819 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03"} err="failed to get container status \"a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03\": rpc error: code = NotFound desc = could not find container \"a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03\": container with ID starting with a5ead71c8e5cd49b7b4da6b93ce26a957dc926d0bb43b7504382fca3e6490a03 not found: ID does not exist" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.401790 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jp4f\" (UniqueName: \"kubernetes.io/projected/46f70137-27be-4f64-9778-cfca8978b247-kube-api-access-4jp4f\") pod \"nova-cell1-conductor-0\" (UID: \"46f70137-27be-4f64-9778-cfca8978b247\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.401938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f70137-27be-4f64-9778-cfca8978b247-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"46f70137-27be-4f64-9778-cfca8978b247\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.401968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f70137-27be-4f64-9778-cfca8978b247-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"46f70137-27be-4f64-9778-cfca8978b247\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.408432 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f70137-27be-4f64-9778-cfca8978b247-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"46f70137-27be-4f64-9778-cfca8978b247\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.409083 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f70137-27be-4f64-9778-cfca8978b247-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"46f70137-27be-4f64-9778-cfca8978b247\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.419876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jp4f\" (UniqueName: \"kubernetes.io/projected/46f70137-27be-4f64-9778-cfca8978b247-kube-api-access-4jp4f\") pod \"nova-cell1-conductor-0\" (UID: \"46f70137-27be-4f64-9778-cfca8978b247\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.503220 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.503328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a413ec36-cb52-4519-ac7c-e7f126b37892-logs\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.503392 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhggl\" (UniqueName: \"kubernetes.io/projected/a413ec36-cb52-4519-ac7c-e7f126b37892-kube-api-access-fhggl\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.503423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-config-data\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.503444 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.510090 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.607808 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a413ec36-cb52-4519-ac7c-e7f126b37892-logs\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.607972 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhggl\" (UniqueName: \"kubernetes.io/projected/a413ec36-cb52-4519-ac7c-e7f126b37892-kube-api-access-fhggl\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.608023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-config-data\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.608056 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.608140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.608212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a413ec36-cb52-4519-ac7c-e7f126b37892-logs\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.616213 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-config-data\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.618350 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.618801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.627940 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhggl\" (UniqueName: 
\"kubernetes.io/projected/a413ec36-cb52-4519-ac7c-e7f126b37892-kube-api-access-fhggl\") pod \"nova-metadata-0\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " pod="openstack/nova-metadata-0" Feb 18 00:55:36 crc kubenswrapper[4858]: I0218 00:55:36.659288 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.016592 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.159869 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:55:37 crc kubenswrapper[4858]: W0218 00:55:37.175511 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda413ec36_cb52_4519_ac7c_e7f126b37892.slice/crio-670b6f30d8fe89cb5e070c36b8a930861191c5412e7a98baa34f4e19b003d610 WatchSource:0}: Error finding container 670b6f30d8fe89cb5e070c36b8a930861191c5412e7a98baa34f4e19b003d610: Status 404 returned error can't find the container with id 670b6f30d8fe89cb5e070c36b8a930861191c5412e7a98baa34f4e19b003d610 Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.260133 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a413ec36-cb52-4519-ac7c-e7f126b37892","Type":"ContainerStarted","Data":"670b6f30d8fe89cb5e070c36b8a930861191c5412e7a98baa34f4e19b003d610"} Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.262621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerStarted","Data":"21c4e2b5ccf6ae5c7b39b5e9b1ee7cec40ede41a5dffdd87419bef84a805e20d"} Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.262663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerStarted","Data":"94dc3cb01920cf48b56581e305bf1faac7f38db4bfcfd87cbd060f0a8be61e43"} Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.263686 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"46f70137-27be-4f64-9778-cfca8978b247","Type":"ContainerStarted","Data":"a0cf2e0bcb1e51ab4a616134726da38374c03690e726dd38bd33e14486a3915c"} Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.263731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"46f70137-27be-4f64-9778-cfca8978b247","Type":"ContainerStarted","Data":"fe1393b6ae785e96f2f4f524c53b321e1c6c1cc968f38df09d13483a188609fd"} Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.263900 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.281999 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.28198226 podStartE2EDuration="1.28198226s" podCreationTimestamp="2026-02-18 00:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:55:37.278209978 +0000 UTC m=+1290.584046720" watchObservedRunningTime="2026-02-18 00:55:37.28198226 +0000 UTC m=+1290.587818992" Feb 18 00:55:37 crc kubenswrapper[4858]: I0218 00:55:37.444874 4858 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e946aa8f-18ab-49c1-9f17-59d5264e7c9d" path="/var/lib/kubelet/pods/e946aa8f-18ab-49c1-9f17-59d5264e7c9d/volumes" Feb 18 00:55:38 crc kubenswrapper[4858]: I0218 00:55:38.281739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerStarted","Data":"6a8a3c594dfcd558fc066a74be34cb2dc82dd3a8c1281eeef85957435465c4cc"} Feb 18 00:55:38 crc kubenswrapper[4858]: I0218 00:55:38.284254 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a413ec36-cb52-4519-ac7c-e7f126b37892","Type":"ContainerStarted","Data":"fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279"} Feb 18 00:55:38 crc kubenswrapper[4858]: I0218 00:55:38.284310 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a413ec36-cb52-4519-ac7c-e7f126b37892","Type":"ContainerStarted","Data":"dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87"} Feb 18 00:55:38 crc kubenswrapper[4858]: I0218 00:55:38.306124 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.306107774 podStartE2EDuration="2.306107774s" podCreationTimestamp="2026-02-18 00:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:55:38.304566926 +0000 UTC m=+1291.610403678" watchObservedRunningTime="2026-02-18 00:55:38.306107774 +0000 UTC m=+1291.611944506" Feb 18 00:55:38 crc kubenswrapper[4858]: E0218 00:55:38.416480 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edb5a4994c765341d3fcebd23803137cc098ffe7a4f710f043c85d9d1841d0eb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 00:55:38 crc kubenswrapper[4858]: E0218 00:55:38.420164 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edb5a4994c765341d3fcebd23803137cc098ffe7a4f710f043c85d9d1841d0eb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 00:55:38 crc kubenswrapper[4858]: E0218 00:55:38.421453 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edb5a4994c765341d3fcebd23803137cc098ffe7a4f710f043c85d9d1841d0eb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 00:55:38 crc kubenswrapper[4858]: E0218 00:55:38.421572 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1bcc4262-82a0-46b5-bdd4-ec2465032a84" containerName="nova-scheduler-scheduler" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.194821 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.282176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k59ch\" (UniqueName: \"kubernetes.io/projected/694d16d9-f59c-48f2-90a6-3994846c0ca5-kube-api-access-k59ch\") pod \"694d16d9-f59c-48f2-90a6-3994846c0ca5\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.282368 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/694d16d9-f59c-48f2-90a6-3994846c0ca5-logs\") pod \"694d16d9-f59c-48f2-90a6-3994846c0ca5\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.282405 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-config-data\") pod \"694d16d9-f59c-48f2-90a6-3994846c0ca5\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.282432 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-combined-ca-bundle\") pod \"694d16d9-f59c-48f2-90a6-3994846c0ca5\" (UID: \"694d16d9-f59c-48f2-90a6-3994846c0ca5\") " Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.283764 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/694d16d9-f59c-48f2-90a6-3994846c0ca5-logs" (OuterVolumeSpecName: "logs") pod "694d16d9-f59c-48f2-90a6-3994846c0ca5" (UID: "694d16d9-f59c-48f2-90a6-3994846c0ca5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.331706 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694d16d9-f59c-48f2-90a6-3994846c0ca5-kube-api-access-k59ch" (OuterVolumeSpecName: "kube-api-access-k59ch") pod "694d16d9-f59c-48f2-90a6-3994846c0ca5" (UID: "694d16d9-f59c-48f2-90a6-3994846c0ca5"). InnerVolumeSpecName "kube-api-access-k59ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.333032 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "694d16d9-f59c-48f2-90a6-3994846c0ca5" (UID: "694d16d9-f59c-48f2-90a6-3994846c0ca5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.336801 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-config-data" (OuterVolumeSpecName: "config-data") pod "694d16d9-f59c-48f2-90a6-3994846c0ca5" (UID: "694d16d9-f59c-48f2-90a6-3994846c0ca5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.343058 4858 generic.go:334] "Generic (PLEG): container finished" podID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerID="2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2" exitCode=0 Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.343150 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.343174 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"694d16d9-f59c-48f2-90a6-3994846c0ca5","Type":"ContainerDied","Data":"2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2"} Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.344258 4858 scope.go:117] "RemoveContainer" containerID="2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.344069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"694d16d9-f59c-48f2-90a6-3994846c0ca5","Type":"ContainerDied","Data":"9e1c22135b1c9e66852da82704745a10b290f9667e50845fdc7a55218b402587"} Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.360664 4858 generic.go:334] "Generic (PLEG): container finished" podID="1bcc4262-82a0-46b5-bdd4-ec2465032a84" containerID="edb5a4994c765341d3fcebd23803137cc098ffe7a4f710f043c85d9d1841d0eb" exitCode=0 Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.360909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bcc4262-82a0-46b5-bdd4-ec2465032a84","Type":"ContainerDied","Data":"edb5a4994c765341d3fcebd23803137cc098ffe7a4f710f043c85d9d1841d0eb"} Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.387229 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/694d16d9-f59c-48f2-90a6-3994846c0ca5-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.387252 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.387261 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/694d16d9-f59c-48f2-90a6-3994846c0ca5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.387269 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k59ch\" (UniqueName: \"kubernetes.io/projected/694d16d9-f59c-48f2-90a6-3994846c0ca5-kube-api-access-k59ch\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.400845 4858 scope.go:117] "RemoveContainer" containerID="0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.413074 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.421800 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.447582 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 00:55:40 crc kubenswrapper[4858]: E0218 00:55:40.448102 4858 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-api" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.448122 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-api" Feb 18 00:55:40 crc kubenswrapper[4858]: E0218 00:55:40.448147 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-log" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.448155 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-log" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.448487 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-log" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.448553 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" containerName="nova-api-api" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.450131 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.459524 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.469120 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.488704 4858 scope.go:117] "RemoveContainer" containerID="2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2" Feb 18 00:55:40 crc kubenswrapper[4858]: E0218 00:55:40.489390 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2\": container with ID starting with 2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2 not found: ID does not exist" containerID="2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.489436 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2"} err="failed to get container status \"2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2\": rpc error: code = NotFound desc = could not find container \"2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2\": container with ID starting with 2c445a07d703735417845b3184855bc07fec1ad35ca110c23d0c875b0fce62e2 not found: ID does not exist" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.489464 4858 scope.go:117] "RemoveContainer" containerID="0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c" Feb 18 00:55:40 crc kubenswrapper[4858]: E0218 00:55:40.490408 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c\": container with ID starting with 0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c not found: ID does not exist" containerID="0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.490435 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c"} err="failed to get container status \"0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c\": rpc error: code = NotFound desc = could not find container \"0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c\": container with ID starting with 0c413792cf8b06a52fe996e2ee2a09a962fe31b669ff48cb9cde9f2d6677ed6c not found: ID does not exist" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.504660 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.590404 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-config-data\") pod \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.590929 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-combined-ca-bundle\") pod \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.591046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptfpk\" (UniqueName: \"kubernetes.io/projected/1bcc4262-82a0-46b5-bdd4-ec2465032a84-kube-api-access-ptfpk\") pod \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\" (UID: \"1bcc4262-82a0-46b5-bdd4-ec2465032a84\") " Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.591357 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-config-data\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.591584 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjms7\" (UniqueName: \"kubernetes.io/projected/2c54f9c6-6908-40ec-af96-cc27f133dc87-kube-api-access-jjms7\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.591674 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c54f9c6-6908-40ec-af96-cc27f133dc87-logs\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.591739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.594765 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcc4262-82a0-46b5-bdd4-ec2465032a84-kube-api-access-ptfpk" (OuterVolumeSpecName: "kube-api-access-ptfpk") pod 
"1bcc4262-82a0-46b5-bdd4-ec2465032a84" (UID: "1bcc4262-82a0-46b5-bdd4-ec2465032a84"). InnerVolumeSpecName "kube-api-access-ptfpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.619700 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-config-data" (OuterVolumeSpecName: "config-data") pod "1bcc4262-82a0-46b5-bdd4-ec2465032a84" (UID: "1bcc4262-82a0-46b5-bdd4-ec2465032a84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.621095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bcc4262-82a0-46b5-bdd4-ec2465032a84" (UID: "1bcc4262-82a0-46b5-bdd4-ec2465032a84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.693480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjms7\" (UniqueName: \"kubernetes.io/projected/2c54f9c6-6908-40ec-af96-cc27f133dc87-kube-api-access-jjms7\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.693572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c54f9c6-6908-40ec-af96-cc27f133dc87-logs\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.693610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.693633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-config-data\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.693694 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.693704 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptfpk\" (UniqueName: \"kubernetes.io/projected/1bcc4262-82a0-46b5-bdd4-ec2465032a84-kube-api-access-ptfpk\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.693715 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcc4262-82a0-46b5-bdd4-ec2465032a84-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.694484 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c54f9c6-6908-40ec-af96-cc27f133dc87-logs\") pod \"nova-api-0\" (UID: 
\"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.697798 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-config-data\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.698521 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.715896 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjms7\" (UniqueName: \"kubernetes.io/projected/2c54f9c6-6908-40ec-af96-cc27f133dc87-kube-api-access-jjms7\") pod \"nova-api-0\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " pod="openstack/nova-api-0" Feb 18 00:55:40 crc kubenswrapper[4858]: I0218 00:55:40.795251 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.264867 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.378178 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bcc4262-82a0-46b5-bdd4-ec2465032a84","Type":"ContainerDied","Data":"b8f64447e675a6d1615cf72880b7519aa45fab294b23e79a7849d6bb5d3fbd53"} Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.378452 4858 scope.go:117] "RemoveContainer" containerID="edb5a4994c765341d3fcebd23803137cc098ffe7a4f710f043c85d9d1841d0eb" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.378479 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.381193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c54f9c6-6908-40ec-af96-cc27f133dc87","Type":"ContainerStarted","Data":"72dbda1cd9f3d8f8a78262a3345fb771aee4ab66529f460aaa1330ea7e88e6f2"} Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.461861 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694d16d9-f59c-48f2-90a6-3994846c0ca5" path="/var/lib/kubelet/pods/694d16d9-f59c-48f2-90a6-3994846c0ca5/volumes" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.462654 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.473598 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.473717 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.498041 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:55:41 crc kubenswrapper[4858]: E0218 00:55:41.498700 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcc4262-82a0-46b5-bdd4-ec2465032a84" containerName="nova-scheduler-scheduler" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.498719 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcc4262-82a0-46b5-bdd4-ec2465032a84" containerName="nova-scheduler-scheduler" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.498922 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcc4262-82a0-46b5-bdd4-ec2465032a84" containerName="nova-scheduler-scheduler" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.499853 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.505148 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.511541 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.617685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-config-data\") pod \"nova-scheduler-0\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.618019 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc4s4\" (UniqueName: \"kubernetes.io/projected/cff65b88-5359-4a4c-a85c-d502b0958655-kube-api-access-mc4s4\") pod \"nova-scheduler-0\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.618115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.660164 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.660213 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.719651 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-config-data\") pod \"nova-scheduler-0\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.719705 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc4s4\" (UniqueName: \"kubernetes.io/projected/cff65b88-5359-4a4c-a85c-d502b0958655-kube-api-access-mc4s4\") pod \"nova-scheduler-0\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.719802 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.723198 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-config-data\") pod \"nova-scheduler-0\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.724401 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.746264 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc4s4\" (UniqueName: \"kubernetes.io/projected/cff65b88-5359-4a4c-a85c-d502b0958655-kube-api-access-mc4s4\") pod \"nova-scheduler-0\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " pod="openstack/nova-scheduler-0" Feb 18 00:55:41 crc kubenswrapper[4858]: I0218 00:55:41.838536 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:55:42 crc kubenswrapper[4858]: I0218 00:55:42.322233 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:55:42 crc kubenswrapper[4858]: I0218 00:55:42.400522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cff65b88-5359-4a4c-a85c-d502b0958655","Type":"ContainerStarted","Data":"98aa3672486a5575b372aa59beeb623c187b89ef4eb01af5dd7d1bac944edf50"} Feb 18 00:55:42 crc kubenswrapper[4858]: I0218 00:55:42.402540 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c54f9c6-6908-40ec-af96-cc27f133dc87","Type":"ContainerStarted","Data":"f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860"} Feb 18 00:55:42 crc kubenswrapper[4858]: I0218 00:55:42.402575 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c54f9c6-6908-40ec-af96-cc27f133dc87","Type":"ContainerStarted","Data":"612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d"} Feb 18 00:55:42 crc kubenswrapper[4858]: I0218 00:55:42.421154 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.421127672 podStartE2EDuration="2.421127672s" podCreationTimestamp="2026-02-18 00:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:55:42.420268611 +0000 UTC m=+1295.726105353" watchObservedRunningTime="2026-02-18 00:55:42.421127672 +0000 UTC m=+1295.726964424" Feb 18 00:55:43 crc kubenswrapper[4858]: I0218 00:55:43.416025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cff65b88-5359-4a4c-a85c-d502b0958655","Type":"ContainerStarted","Data":"b7ccfe17b67f842a2c7787ee0076fd9dce772b920a2c51781a44ce67c5f45cbd"} Feb 18 00:55:43 crc kubenswrapper[4858]: I0218 00:55:43.432375 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcc4262-82a0-46b5-bdd4-ec2465032a84" path="/var/lib/kubelet/pods/1bcc4262-82a0-46b5-bdd4-ec2465032a84/volumes" Feb 18 00:55:43 crc kubenswrapper[4858]: I0218 00:55:43.444086 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.444058876 podStartE2EDuration="2.444058876s" podCreationTimestamp="2026-02-18 00:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:55:43.435460647 +0000 UTC m=+1296.741297379" watchObservedRunningTime="2026-02-18 00:55:43.444058876 +0000 UTC m=+1296.749895618" Feb 18 00:55:46 crc kubenswrapper[4858]: I0218 00:55:46.558465 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 00:55:46 crc kubenswrapper[4858]: I0218 00:55:46.660010 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 00:55:46 crc kubenswrapper[4858]: I0218 00:55:46.660072 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 00:55:46 crc kubenswrapper[4858]: I0218 00:55:46.843431 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 00:55:47 crc kubenswrapper[4858]: I0218 00:55:47.676701 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:55:47 crc kubenswrapper[4858]: I0218 00:55:47.676780 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:55:48 crc kubenswrapper[4858]: I0218 00:55:48.471926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerStarted","Data":"c92c271444633e05ba5681facde284f69eb1c4f193073b227a5d103f83926ac0"} Feb 18 00:55:48 crc kubenswrapper[4858]: I0218 00:55:48.472442 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:55:48 crc kubenswrapper[4858]: I0218 00:55:48.509197 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.300943866 podStartE2EDuration="14.509179485s" podCreationTimestamp="2026-02-18 00:55:34 +0000 UTC" firstStartedPulling="2026-02-18 00:55:35.578897589 +0000 UTC m=+1288.884734321" lastFinishedPulling="2026-02-18 00:55:47.787133178 +0000 UTC m=+1301.092969940" observedRunningTime="2026-02-18 00:55:48.497586372 +0000 UTC m=+1301.803423104" watchObservedRunningTime="2026-02-18 00:55:48.509179485 +0000 UTC m=+1301.815016217" Feb 18 00:55:50 crc kubenswrapper[4858]: I0218 00:55:50.796555 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:55:50 crc kubenswrapper[4858]: I0218 00:55:50.797162 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:55:51 crc kubenswrapper[4858]: I0218 00:55:51.839735 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 00:55:51 crc kubenswrapper[4858]: I0218 00:55:51.870819 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 00:55:51 crc kubenswrapper[4858]: I0218 00:55:51.878717 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.226:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:55:51 crc kubenswrapper[4858]: I0218 00:55:51.878880 4858 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.226:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:55:52 crc kubenswrapper[4858]: I0218 00:55:52.576723 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 00:55:55 crc kubenswrapper[4858]: I0218 00:55:55.264993 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:55:55 crc kubenswrapper[4858]: I0218 00:55:55.265377 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:55:56 crc kubenswrapper[4858]: I0218 00:55:56.666762 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 00:55:56 crc kubenswrapper[4858]: I0218 00:55:56.668290 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 00:55:56 crc kubenswrapper[4858]: I0218 00:55:56.676064 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 00:55:57 crc kubenswrapper[4858]: I0218 00:55:57.615003 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.552206 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.643663 4858 generic.go:334] "Generic (PLEG): container finished" podID="057985d0-f8a3-4924-af98-13a66b730569" containerID="bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4" exitCode=137 Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.643781 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"057985d0-f8a3-4924-af98-13a66b730569","Type":"ContainerDied","Data":"bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4"} Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.643796 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.643874 4858 scope.go:117] "RemoveContainer" containerID="bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.643852 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"057985d0-f8a3-4924-af98-13a66b730569","Type":"ContainerDied","Data":"5e086b6fe196aa712afb3ac4dd1ea71df396631436819b06befa820583d13297"} Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.655345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv9wr\" (UniqueName: \"kubernetes.io/projected/057985d0-f8a3-4924-af98-13a66b730569-kube-api-access-kv9wr\") pod \"057985d0-f8a3-4924-af98-13a66b730569\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.656067 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-config-data\") pod \"057985d0-f8a3-4924-af98-13a66b730569\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.656104 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-combined-ca-bundle\") pod \"057985d0-f8a3-4924-af98-13a66b730569\" (UID: \"057985d0-f8a3-4924-af98-13a66b730569\") " Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.662606 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/057985d0-f8a3-4924-af98-13a66b730569-kube-api-access-kv9wr" (OuterVolumeSpecName: "kube-api-access-kv9wr") pod "057985d0-f8a3-4924-af98-13a66b730569" (UID: "057985d0-f8a3-4924-af98-13a66b730569"). InnerVolumeSpecName "kube-api-access-kv9wr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.689289 4858 scope.go:117] "RemoveContainer" containerID="bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4" Feb 18 00:55:59 crc kubenswrapper[4858]: E0218 00:55:59.690014 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4\": container with ID starting with bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4 not found: ID does not exist" containerID="bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.690048 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4"} err="failed to get container status \"bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4\": rpc error: code = NotFound desc = could not find container \"bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4\": container with ID starting with bd0d5443a1ef8a41588da6429d3acf0b9d0eae2ed48fb78df10e91e0337f2ac4 not found: ID does not exist" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.695059 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-config-data" (OuterVolumeSpecName: "config-data") pod "057985d0-f8a3-4924-af98-13a66b730569" (UID: "057985d0-f8a3-4924-af98-13a66b730569"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.710663 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "057985d0-f8a3-4924-af98-13a66b730569" (UID: "057985d0-f8a3-4924-af98-13a66b730569"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.758750 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv9wr\" (UniqueName: \"kubernetes.io/projected/057985d0-f8a3-4924-af98-13a66b730569-kube-api-access-kv9wr\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.758779 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.758792 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057985d0-f8a3-4924-af98-13a66b730569-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:55:59 crc kubenswrapper[4858]: I0218 00:55:59.993213 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.003149 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.032260 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:56:00 crc kubenswrapper[4858]: E0218 00:56:00.032895 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057985d0-f8a3-4924-af98-13a66b730569" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.032918 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="057985d0-f8a3-4924-af98-13a66b730569" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.033171 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="057985d0-f8a3-4924-af98-13a66b730569" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.034123 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.036369 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.037308 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.037415 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.045409 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.066640 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.066827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.067030 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.067147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wj5v\" (UniqueName: \"kubernetes.io/projected/57fe3041-27cb-4e28-949c-7d5a37d033fc-kube-api-access-5wj5v\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.067313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.168818 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.168945 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.168975 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wj5v\" (UniqueName: \"kubernetes.io/projected/57fe3041-27cb-4e28-949c-7d5a37d033fc-kube-api-access-5wj5v\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.169044 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.169097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.174205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.174718 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.175268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.179083 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fe3041-27cb-4e28-949c-7d5a37d033fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.184816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wj5v\" (UniqueName: \"kubernetes.io/projected/57fe3041-27cb-4e28-949c-7d5a37d033fc-kube-api-access-5wj5v\") pod \"nova-cell1-novncproxy-0\" (UID: \"57fe3041-27cb-4e28-949c-7d5a37d033fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.411032 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.800482 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.800863 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.801291 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.801395 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.803800 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 00:56:00 crc kubenswrapper[4858]: I0218 00:56:00.804505 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.026950 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-th7f4"] Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.028761 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.044361 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.069435 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-th7f4"] Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.100705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.100795 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-config\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.100841 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.100891 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kspxc\" (UniqueName: \"kubernetes.io/projected/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-kube-api-access-kspxc\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.100919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.100951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.202635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.202737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.202806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-config\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.202852 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.202897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kspxc\" (UniqueName: \"kubernetes.io/projected/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-kube-api-access-kspxc\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.203019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.204274 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-config\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.204283 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-svc\") 
pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.204300 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.204645 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.204767 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.220709 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kspxc\" (UniqueName: \"kubernetes.io/projected/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-kube-api-access-kspxc\") pod \"dnsmasq-dns-5fd9b586ff-th7f4\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.309392 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.441442 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="057985d0-f8a3-4924-af98-13a66b730569" path="/var/lib/kubelet/pods/057985d0-f8a3-4924-af98-13a66b730569/volumes" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.700999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57fe3041-27cb-4e28-949c-7d5a37d033fc","Type":"ContainerStarted","Data":"a5af84e50f850952ca00906fc405ef3896c52b7be6a4ffa0f50b532d7a5f7772"} Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.701053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57fe3041-27cb-4e28-949c-7d5a37d033fc","Type":"ContainerStarted","Data":"4ee142c05e30c2d3b3c5d39bc9e13195dceaf4f720f0ed6b6b0c00aae9780b13"} Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.731359 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.7313449 podStartE2EDuration="2.7313449s" podCreationTimestamp="2026-02-18 00:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:56:01.722778142 +0000 UTC m=+1315.028614874" watchObservedRunningTime="2026-02-18 00:56:01.7313449 +0000 UTC m=+1315.037181632" Feb 18 00:56:01 crc kubenswrapper[4858]: I0218 00:56:01.855316 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-th7f4"] Feb 18 00:56:02 crc kubenswrapper[4858]: I0218 00:56:02.711318 4858 generic.go:334] "Generic (PLEG): container finished" podID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" containerID="14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631" exitCode=0 Feb 18 00:56:02 crc kubenswrapper[4858]: I0218 00:56:02.711418 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" event={"ID":"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96","Type":"ContainerDied","Data":"14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631"} Feb 18 00:56:02 crc kubenswrapper[4858]: I0218 00:56:02.712283 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" event={"ID":"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96","Type":"ContainerStarted","Data":"ee0477df752ba161dc005c82d1447e5abe3245be545b42a5c6ea9df436fbfaad"} Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.131151 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.131413 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="ceilometer-central-agent" containerID="cri-o://94dc3cb01920cf48b56581e305bf1faac7f38db4bfcfd87cbd060f0a8be61e43" gracePeriod=30 Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.131693 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="proxy-httpd" containerID="cri-o://c92c271444633e05ba5681facde284f69eb1c4f193073b227a5d103f83926ac0" gracePeriod=30 Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.131778 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="sg-core" containerID="cri-o://6a8a3c594dfcd558fc066a74be34cb2dc82dd3a8c1281eeef85957435465c4cc" gracePeriod=30 Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.131814 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="ceilometer-notification-agent" containerID="cri-o://21c4e2b5ccf6ae5c7b39b5e9b1ee7cec40ede41a5dffdd87419bef84a805e20d" gracePeriod=30 Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.146966 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.223:3000/\": EOF" Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.370754 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.722066 4858 generic.go:334] "Generic (PLEG): container finished" podID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerID="c92c271444633e05ba5681facde284f69eb1c4f193073b227a5d103f83926ac0" exitCode=0 Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.722096 4858 generic.go:334] "Generic (PLEG): container finished" podID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerID="6a8a3c594dfcd558fc066a74be34cb2dc82dd3a8c1281eeef85957435465c4cc" exitCode=2 Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.722103 4858 generic.go:334] "Generic (PLEG): container finished" podID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerID="94dc3cb01920cf48b56581e305bf1faac7f38db4bfcfd87cbd060f0a8be61e43" exitCode=0 Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.722150 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerDied","Data":"c92c271444633e05ba5681facde284f69eb1c4f193073b227a5d103f83926ac0"} Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.722176 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerDied","Data":"6a8a3c594dfcd558fc066a74be34cb2dc82dd3a8c1281eeef85957435465c4cc"} Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.722187 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerDied","Data":"94dc3cb01920cf48b56581e305bf1faac7f38db4bfcfd87cbd060f0a8be61e43"} Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.723761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" event={"ID":"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96","Type":"ContainerStarted","Data":"2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38"} Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.723883 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-log" containerID="cri-o://612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d" gracePeriod=30 Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.723981 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-api" 
containerID="cri-o://f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860" gracePeriod=30 Feb 18 00:56:03 crc kubenswrapper[4858]: I0218 00:56:03.748660 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" podStartSLOduration=3.748645125 podStartE2EDuration="3.748645125s" podCreationTimestamp="2026-02-18 00:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:56:03.746081292 +0000 UTC m=+1317.051918024" watchObservedRunningTime="2026-02-18 00:56:03.748645125 +0000 UTC m=+1317.054481857" Feb 18 00:56:04 crc kubenswrapper[4858]: I0218 00:56:04.736404 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerID="612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d" exitCode=143 Feb 18 00:56:04 crc kubenswrapper[4858]: I0218 00:56:04.736501 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c54f9c6-6908-40ec-af96-cc27f133dc87","Type":"ContainerDied","Data":"612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d"} Feb 18 00:56:04 crc kubenswrapper[4858]: I0218 00:56:04.736944 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:05 crc kubenswrapper[4858]: I0218 00:56:05.069295 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.223:3000/\": dial tcp 10.217.0.223:3000: connect: connection refused" Feb 18 00:56:05 crc kubenswrapper[4858]: I0218 00:56:05.411681 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.495731 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.649039 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-config-data\") pod \"2c54f9c6-6908-40ec-af96-cc27f133dc87\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.649101 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjms7\" (UniqueName: \"kubernetes.io/projected/2c54f9c6-6908-40ec-af96-cc27f133dc87-kube-api-access-jjms7\") pod \"2c54f9c6-6908-40ec-af96-cc27f133dc87\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.649314 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c54f9c6-6908-40ec-af96-cc27f133dc87-logs\") pod \"2c54f9c6-6908-40ec-af96-cc27f133dc87\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.649353 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-combined-ca-bundle\") pod \"2c54f9c6-6908-40ec-af96-cc27f133dc87\" (UID: \"2c54f9c6-6908-40ec-af96-cc27f133dc87\") " Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.650253 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c54f9c6-6908-40ec-af96-cc27f133dc87-logs" (OuterVolumeSpecName: "logs") pod "2c54f9c6-6908-40ec-af96-cc27f133dc87" (UID: "2c54f9c6-6908-40ec-af96-cc27f133dc87"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.657414 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c54f9c6-6908-40ec-af96-cc27f133dc87-kube-api-access-jjms7" (OuterVolumeSpecName: "kube-api-access-jjms7") pod "2c54f9c6-6908-40ec-af96-cc27f133dc87" (UID: "2c54f9c6-6908-40ec-af96-cc27f133dc87"). InnerVolumeSpecName "kube-api-access-jjms7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.697166 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c54f9c6-6908-40ec-af96-cc27f133dc87" (UID: "2c54f9c6-6908-40ec-af96-cc27f133dc87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.699637 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-config-data" (OuterVolumeSpecName: "config-data") pod "2c54f9c6-6908-40ec-af96-cc27f133dc87" (UID: "2c54f9c6-6908-40ec-af96-cc27f133dc87"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.752101 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.752135 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjms7\" (UniqueName: \"kubernetes.io/projected/2c54f9c6-6908-40ec-af96-cc27f133dc87-kube-api-access-jjms7\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.752145 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c54f9c6-6908-40ec-af96-cc27f133dc87-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.752153 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c54f9c6-6908-40ec-af96-cc27f133dc87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.770255 4858 generic.go:334] "Generic (PLEG): container finished" podID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerID="21c4e2b5ccf6ae5c7b39b5e9b1ee7cec40ede41a5dffdd87419bef84a805e20d" exitCode=0 Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.770328 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerDied","Data":"21c4e2b5ccf6ae5c7b39b5e9b1ee7cec40ede41a5dffdd87419bef84a805e20d"} Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.771540 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerID="f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860" exitCode=0 Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.771561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c54f9c6-6908-40ec-af96-cc27f133dc87","Type":"ContainerDied","Data":"f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860"} Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.771576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c54f9c6-6908-40ec-af96-cc27f133dc87","Type":"ContainerDied","Data":"72dbda1cd9f3d8f8a78262a3345fb771aee4ab66529f460aaa1330ea7e88e6f2"} Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.771591 4858 scope.go:117] "RemoveContainer" containerID="f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.771720 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.793609 4858 scope.go:117] "RemoveContainer" containerID="612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.808335 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.823568 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.827096 4858 scope.go:117] "RemoveContainer" containerID="f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860" Feb 18 00:56:07 crc kubenswrapper[4858]: E0218 00:56:07.827569 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860\": container with ID starting with f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860 not found: ID does not exist" containerID="f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.827617 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860"} err="failed to get container status \"f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860\": rpc error: code = NotFound desc = could not find container \"f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860\": container with ID starting with f798ff721335e80059b8419296a189afa528639ef085882479d21bdaed558860 not found: ID does not exist" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.827643 4858 scope.go:117] "RemoveContainer" containerID="612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d" Feb 18 00:56:07 crc kubenswrapper[4858]: E0218 00:56:07.827972 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d\": container with ID starting with 612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d not found: ID does not exist" containerID="612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.828000 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d"} err="failed to get container status \"612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d\": rpc error: code = NotFound desc = could not find container \"612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d\": container with ID starting with 612d021b9c52e5b86272577d1bdd892038798c838fe03f33d98f20990bbf504d not found: ID does not exist" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.838562 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:07 crc kubenswrapper[4858]: E0218 00:56:07.839027 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-api" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.839046 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-api" Feb 18 00:56:07 crc 
kubenswrapper[4858]: E0218 00:56:07.839077 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-log" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.839083 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-log" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.840899 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-api" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.840961 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" containerName="nova-api-log" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.842223 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.849265 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.850047 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.850160 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.851777 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.958836 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-config-data\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.959397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.959549 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.959694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xsh5\" (UniqueName: \"kubernetes.io/projected/bd970003-e8fd-4cb2-b95e-f85e79329ae1-kube-api-access-2xsh5\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.959791 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd970003-e8fd-4cb2-b95e-f85e79329ae1-logs\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:07 crc kubenswrapper[4858]: I0218 00:56:07.959948 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-public-tls-certs\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.062214 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-public-tls-certs\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.062317 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-config-data\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.062363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.062388 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.062436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xsh5\" (UniqueName: \"kubernetes.io/projected/bd970003-e8fd-4cb2-b95e-f85e79329ae1-kube-api-access-2xsh5\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.062457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd970003-e8fd-4cb2-b95e-f85e79329ae1-logs\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.062976 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd970003-e8fd-4cb2-b95e-f85e79329ae1-logs\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.066812 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-public-tls-certs\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.066903 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.067270 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.067593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-config-data\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.082701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xsh5\" (UniqueName: \"kubernetes.io/projected/bd970003-e8fd-4cb2-b95e-f85e79329ae1-kube-api-access-2xsh5\") pod \"nova-api-0\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.164197 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.178940 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.265114 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-sg-core-conf-yaml\") pod \"059e53fe-a613-4e7e-99aa-81c491f09a5a\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.265173 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-run-httpd\") pod \"059e53fe-a613-4e7e-99aa-81c491f09a5a\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.265270 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-scripts\") pod \"059e53fe-a613-4e7e-99aa-81c491f09a5a\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.265290 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz82q\" (UniqueName: \"kubernetes.io/projected/059e53fe-a613-4e7e-99aa-81c491f09a5a-kube-api-access-dz82q\") pod \"059e53fe-a613-4e7e-99aa-81c491f09a5a\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.265312 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-ceilometer-tls-certs\") pod \"059e53fe-a613-4e7e-99aa-81c491f09a5a\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.265339 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-combined-ca-bundle\") pod \"059e53fe-a613-4e7e-99aa-81c491f09a5a\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.265444 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-log-httpd\") pod \"059e53fe-a613-4e7e-99aa-81c491f09a5a\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.265508 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-config-data\") pod \"059e53fe-a613-4e7e-99aa-81c491f09a5a\" (UID: \"059e53fe-a613-4e7e-99aa-81c491f09a5a\") " Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.265756 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "059e53fe-a613-4e7e-99aa-81c491f09a5a" (UID: "059e53fe-a613-4e7e-99aa-81c491f09a5a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.266528 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.266956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "059e53fe-a613-4e7e-99aa-81c491f09a5a" (UID: "059e53fe-a613-4e7e-99aa-81c491f09a5a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.270249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/059e53fe-a613-4e7e-99aa-81c491f09a5a-kube-api-access-dz82q" (OuterVolumeSpecName: "kube-api-access-dz82q") pod "059e53fe-a613-4e7e-99aa-81c491f09a5a" (UID: "059e53fe-a613-4e7e-99aa-81c491f09a5a"). InnerVolumeSpecName "kube-api-access-dz82q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.270786 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-scripts" (OuterVolumeSpecName: "scripts") pod "059e53fe-a613-4e7e-99aa-81c491f09a5a" (UID: "059e53fe-a613-4e7e-99aa-81c491f09a5a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.325526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "059e53fe-a613-4e7e-99aa-81c491f09a5a" (UID: "059e53fe-a613-4e7e-99aa-81c491f09a5a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.371772 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.372083 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz82q\" (UniqueName: \"kubernetes.io/projected/059e53fe-a613-4e7e-99aa-81c491f09a5a-kube-api-access-dz82q\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.372163 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/059e53fe-a613-4e7e-99aa-81c491f09a5a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.373263 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.373225 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "059e53fe-a613-4e7e-99aa-81c491f09a5a" (UID: "059e53fe-a613-4e7e-99aa-81c491f09a5a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.438434 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "059e53fe-a613-4e7e-99aa-81c491f09a5a" (UID: "059e53fe-a613-4e7e-99aa-81c491f09a5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.444909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-config-data" (OuterVolumeSpecName: "config-data") pod "059e53fe-a613-4e7e-99aa-81c491f09a5a" (UID: "059e53fe-a613-4e7e-99aa-81c491f09a5a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.474810 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.474837 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.474846 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059e53fe-a613-4e7e-99aa-81c491f09a5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.719469 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.781701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd970003-e8fd-4cb2-b95e-f85e79329ae1","Type":"ContainerStarted","Data":"4e66fcadd6c76fc55999cfa2a7b393126f6a14f50b0e6dcb140038101fce4fb2"} Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.786379 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"059e53fe-a613-4e7e-99aa-81c491f09a5a","Type":"ContainerDied","Data":"213a62a515466372dfc71766702a3019b7c4ea5d3ef0504b684e6d0c612b0cfc"} Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.786450 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.786462 4858 scope.go:117] "RemoveContainer" containerID="c92c271444633e05ba5681facde284f69eb1c4f193073b227a5d103f83926ac0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.826709 4858 scope.go:117] "RemoveContainer" containerID="6a8a3c594dfcd558fc066a74be34cb2dc82dd3a8c1281eeef85957435465c4cc" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.833212 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.856427 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.865889 4858 scope.go:117] "RemoveContainer" containerID="21c4e2b5ccf6ae5c7b39b5e9b1ee7cec40ede41a5dffdd87419bef84a805e20d" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.870757 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:08 crc kubenswrapper[4858]: E0218 00:56:08.871406 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="ceilometer-notification-agent" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.871469 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="ceilometer-notification-agent" Feb 18 00:56:08 crc kubenswrapper[4858]: E0218 00:56:08.871488 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="ceilometer-central-agent" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.871517 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" 
containerName="ceilometer-central-agent" Feb 18 00:56:08 crc kubenswrapper[4858]: E0218 00:56:08.871559 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="proxy-httpd" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.871566 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="proxy-httpd" Feb 18 00:56:08 crc kubenswrapper[4858]: E0218 00:56:08.871585 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="sg-core" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.871593 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="sg-core" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.871811 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="ceilometer-central-agent" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.871837 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="ceilometer-notification-agent" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.871858 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="proxy-httpd" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.871876 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" containerName="sg-core" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.881476 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.883673 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.883918 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.884038 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.897068 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.928358 4858 scope.go:117] "RemoveContainer" containerID="94dc3cb01920cf48b56581e305bf1faac7f38db4bfcfd87cbd060f0a8be61e43" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.983488 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-scripts\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.983642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.983794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-log-httpd\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.983840 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.983880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-run-httpd\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.983964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.983996 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-config-data\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:08 crc kubenswrapper[4858]: I0218 00:56:08.984071 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkstv\" (UniqueName: \"kubernetes.io/projected/d6b4bc81-da80-4ab1-89b2-4ece6e825118-kube-api-access-gkstv\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.085226 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.085303 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-run-httpd\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.085363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.085389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-config-data\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 
00:56:09.085429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkstv\" (UniqueName: \"kubernetes.io/projected/d6b4bc81-da80-4ab1-89b2-4ece6e825118-kube-api-access-gkstv\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.085563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-scripts\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.085631 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.085714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-log-httpd\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.085875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-run-httpd\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.086070 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-log-httpd\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.090673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.091457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.091705 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-config-data\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.096447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-scripts\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.099350 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.103919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkstv\" (UniqueName: \"kubernetes.io/projected/d6b4bc81-da80-4ab1-89b2-4ece6e825118-kube-api-access-gkstv\") pod \"ceilometer-0\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.207444 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.430797 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="059e53fe-a613-4e7e-99aa-81c491f09a5a" path="/var/lib/kubelet/pods/059e53fe-a613-4e7e-99aa-81c491f09a5a/volumes" Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.432022 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c54f9c6-6908-40ec-af96-cc27f133dc87" path="/var/lib/kubelet/pods/2c54f9c6-6908-40ec-af96-cc27f133dc87/volumes" Feb 18 00:56:09 crc kubenswrapper[4858]: W0218 00:56:09.727077 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6b4bc81_da80_4ab1_89b2_4ece6e825118.slice/crio-2dec41fc12bdf4db43619607a035b094b502e7e39949701bc6f833fefa87efdb WatchSource:0}: Error finding container 2dec41fc12bdf4db43619607a035b094b502e7e39949701bc6f833fefa87efdb: Status 404 returned error can't find the container with id 2dec41fc12bdf4db43619607a035b094b502e7e39949701bc6f833fefa87efdb Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.732177 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.796949 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerStarted","Data":"2dec41fc12bdf4db43619607a035b094b502e7e39949701bc6f833fefa87efdb"} Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.801733 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd970003-e8fd-4cb2-b95e-f85e79329ae1","Type":"ContainerStarted","Data":"c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d"} Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.801788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd970003-e8fd-4cb2-b95e-f85e79329ae1","Type":"ContainerStarted","Data":"5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc"} Feb 18 00:56:09 crc kubenswrapper[4858]: I0218 00:56:09.839083 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.839054206 podStartE2EDuration="2.839054206s" podCreationTimestamp="2026-02-18 00:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:56:09.818871894 +0000 UTC m=+1323.124708636" watchObservedRunningTime="2026-02-18 00:56:09.839054206 +0000 UTC m=+1323.144890948" Feb 18 00:56:10 crc kubenswrapper[4858]: I0218 00:56:10.411632 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:10 crc kubenswrapper[4858]: I0218 00:56:10.442413 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:10 crc kubenswrapper[4858]: I0218 00:56:10.814980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerStarted","Data":"43143daaacbcdf8a03d10aa77cb57594c7b0475131a50b08a4145af8289cc961"} Feb 18 00:56:10 crc kubenswrapper[4858]: I0218 00:56:10.833370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.084152 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jbz45"] Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.085694 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.087294 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.087465 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.094750 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jbz45"] Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.234975 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkfkf\" (UniqueName: \"kubernetes.io/projected/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-kube-api-access-zkfkf\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.235018 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.235880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-scripts\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.236057 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-config-data\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.310676 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.338659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-scripts\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.338930 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-config-data\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.339005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkfkf\" (UniqueName: \"kubernetes.io/projected/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-kube-api-access-zkfkf\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.339023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.343954 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-config-data\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.362232 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-scripts\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.363813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.382695 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-bnkqb"] Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.383190 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" podUID="69e75e4d-3e34-492b-8be5-15be3867f605" containerName="dnsmasq-dns" containerID="cri-o://7ba75c2834264dc9531fe9a625497684c95afe88fd4408c499b1b95a40e1b5a0" gracePeriod=10 Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.395186 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkfkf\" (UniqueName: \"kubernetes.io/projected/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-kube-api-access-zkfkf\") pod \"nova-cell1-cell-mapping-jbz45\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.407487 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.864744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerStarted","Data":"f41decab432703f0f263c81cd87dc523d7a5c2bdd3945711618e2defccf5445c"} Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.865015 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerStarted","Data":"e1084691d2fe7f4dffb34a9824f7e81df9c3c8552314420e0b1c13f808ce6461"} Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.867360 4858 generic.go:334] "Generic (PLEG): container finished" podID="69e75e4d-3e34-492b-8be5-15be3867f605" containerID="7ba75c2834264dc9531fe9a625497684c95afe88fd4408c499b1b95a40e1b5a0" exitCode=0 Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.868635 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" event={"ID":"69e75e4d-3e34-492b-8be5-15be3867f605","Type":"ContainerDied","Data":"7ba75c2834264dc9531fe9a625497684c95afe88fd4408c499b1b95a40e1b5a0"} Feb 18 00:56:11 crc kubenswrapper[4858]: I0218 00:56:11.942317 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jbz45"] Feb 18 00:56:11 crc kubenswrapper[4858]: W0218 00:56:11.949367 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cdda4b_3de4_484a_aa99_5ebde30e05d6.slice/crio-fca1fccde96f042ddb69aa3d7819a24c94f3aa24dbdbabcc6751de0e4f9e0283 WatchSource:0}: Error finding container fca1fccde96f042ddb69aa3d7819a24c94f3aa24dbdbabcc6751de0e4f9e0283: Status 404 returned error can't find the container with id fca1fccde96f042ddb69aa3d7819a24c94f3aa24dbdbabcc6751de0e4f9e0283 Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.171413 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.363910 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-svc\") pod \"69e75e4d-3e34-492b-8be5-15be3867f605\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.363956 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-sb\") pod \"69e75e4d-3e34-492b-8be5-15be3867f605\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.363980 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-nb\") pod \"69e75e4d-3e34-492b-8be5-15be3867f605\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.364204 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-swift-storage-0\") pod \"69e75e4d-3e34-492b-8be5-15be3867f605\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.364236 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bl8w\" (UniqueName: \"kubernetes.io/projected/69e75e4d-3e34-492b-8be5-15be3867f605-kube-api-access-7bl8w\") pod \"69e75e4d-3e34-492b-8be5-15be3867f605\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.364256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-config\") pod \"69e75e4d-3e34-492b-8be5-15be3867f605\" (UID: \"69e75e4d-3e34-492b-8be5-15be3867f605\") " Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.383578 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e75e4d-3e34-492b-8be5-15be3867f605-kube-api-access-7bl8w" (OuterVolumeSpecName: "kube-api-access-7bl8w") pod "69e75e4d-3e34-492b-8be5-15be3867f605" (UID: "69e75e4d-3e34-492b-8be5-15be3867f605"). InnerVolumeSpecName "kube-api-access-7bl8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.422763 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "69e75e4d-3e34-492b-8be5-15be3867f605" (UID: "69e75e4d-3e34-492b-8be5-15be3867f605"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.426863 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "69e75e4d-3e34-492b-8be5-15be3867f605" (UID: "69e75e4d-3e34-492b-8be5-15be3867f605"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.428811 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "69e75e4d-3e34-492b-8be5-15be3867f605" (UID: "69e75e4d-3e34-492b-8be5-15be3867f605"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.430867 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "69e75e4d-3e34-492b-8be5-15be3867f605" (UID: "69e75e4d-3e34-492b-8be5-15be3867f605"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.445239 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-config" (OuterVolumeSpecName: "config") pod "69e75e4d-3e34-492b-8be5-15be3867f605" (UID: "69e75e4d-3e34-492b-8be5-15be3867f605"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.467202 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.467232 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bl8w\" (UniqueName: \"kubernetes.io/projected/69e75e4d-3e34-492b-8be5-15be3867f605-kube-api-access-7bl8w\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.467243 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.467252 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.467261 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.467268 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e75e4d-3e34-492b-8be5-15be3867f605-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.878628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" event={"ID":"69e75e4d-3e34-492b-8be5-15be3867f605","Type":"ContainerDied","Data":"21830b11e2a4f56e4d17bf9ae1a88fd39e7e53f3b3de25271b72a12902db45f2"} Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.879002 4858 scope.go:117] "RemoveContainer" containerID="7ba75c2834264dc9531fe9a625497684c95afe88fd4408c499b1b95a40e1b5a0" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.878666 4858 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-bnkqb" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.880558 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jbz45" event={"ID":"d0cdda4b-3de4-484a-aa99-5ebde30e05d6","Type":"ContainerStarted","Data":"a9eb4aeee1720de5e6366f161d92fc3dce15ed08e1ea3689bb92ce0608977bb4"} Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.880599 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jbz45" event={"ID":"d0cdda4b-3de4-484a-aa99-5ebde30e05d6","Type":"ContainerStarted","Data":"fca1fccde96f042ddb69aa3d7819a24c94f3aa24dbdbabcc6751de0e4f9e0283"} Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.919292 4858 scope.go:117] "RemoveContainer" containerID="30a2093867d200f77b2d6a55a663d0b6a05ad6cf73861e98b7913f250b810aa1" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.919844 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jbz45" podStartSLOduration=1.919827043 podStartE2EDuration="1.919827043s" podCreationTimestamp="2026-02-18 00:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:56:12.903375692 +0000 UTC m=+1326.209212424" watchObservedRunningTime="2026-02-18 00:56:12.919827043 +0000 UTC m=+1326.225663775" Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.931305 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-bnkqb"] Feb 18 00:56:12 crc kubenswrapper[4858]: I0218 00:56:12.940408 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-bnkqb"] Feb 18 00:56:13 crc kubenswrapper[4858]: I0218 00:56:13.443605 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e75e4d-3e34-492b-8be5-15be3867f605" path="/var/lib/kubelet/pods/69e75e4d-3e34-492b-8be5-15be3867f605/volumes" Feb 18 00:56:13 crc kubenswrapper[4858]: I0218 00:56:13.892536 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerStarted","Data":"21e87866bde753575b4fe2bf181f94d3806a2750ee40249f40b1a12e51b0f078"} Feb 18 00:56:13 crc kubenswrapper[4858]: I0218 00:56:13.892682 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:56:13 crc kubenswrapper[4858]: I0218 00:56:13.921768 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.142999721 podStartE2EDuration="5.921748437s" podCreationTimestamp="2026-02-18 00:56:08 +0000 UTC" firstStartedPulling="2026-02-18 00:56:09.729436077 +0000 UTC m=+1323.035272809" lastFinishedPulling="2026-02-18 00:56:13.508184793 +0000 UTC m=+1326.814021525" observedRunningTime="2026-02-18 00:56:13.916614091 +0000 UTC m=+1327.222450823" watchObservedRunningTime="2026-02-18 00:56:13.921748437 +0000 UTC m=+1327.227585169" Feb 18 00:56:16 crc kubenswrapper[4858]: I0218 00:56:16.934898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jbz45" event={"ID":"d0cdda4b-3de4-484a-aa99-5ebde30e05d6","Type":"ContainerDied","Data":"a9eb4aeee1720de5e6366f161d92fc3dce15ed08e1ea3689bb92ce0608977bb4"} Feb 18 00:56:16 crc kubenswrapper[4858]: I0218 00:56:16.935002 4858 generic.go:334] "Generic (PLEG): 
container finished" podID="d0cdda4b-3de4-484a-aa99-5ebde30e05d6" containerID="a9eb4aeee1720de5e6366f161d92fc3dce15ed08e1ea3689bb92ce0608977bb4" exitCode=0 Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.184081 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.184313 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.457008 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.613288 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-combined-ca-bundle\") pod \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.613406 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkfkf\" (UniqueName: \"kubernetes.io/projected/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-kube-api-access-zkfkf\") pod \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.613848 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-config-data\") pod \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.613956 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-scripts\") pod \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\" (UID: \"d0cdda4b-3de4-484a-aa99-5ebde30e05d6\") " Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.619316 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-scripts" (OuterVolumeSpecName: "scripts") pod "d0cdda4b-3de4-484a-aa99-5ebde30e05d6" (UID: "d0cdda4b-3de4-484a-aa99-5ebde30e05d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.620061 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-kube-api-access-zkfkf" (OuterVolumeSpecName: "kube-api-access-zkfkf") pod "d0cdda4b-3de4-484a-aa99-5ebde30e05d6" (UID: "d0cdda4b-3de4-484a-aa99-5ebde30e05d6"). InnerVolumeSpecName "kube-api-access-zkfkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.644881 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0cdda4b-3de4-484a-aa99-5ebde30e05d6" (UID: "d0cdda4b-3de4-484a-aa99-5ebde30e05d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.663101 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-config-data" (OuterVolumeSpecName: "config-data") pod "d0cdda4b-3de4-484a-aa99-5ebde30e05d6" (UID: "d0cdda4b-3de4-484a-aa99-5ebde30e05d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.717036 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.717075 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.717088 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.717101 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkfkf\" (UniqueName: \"kubernetes.io/projected/d0cdda4b-3de4-484a-aa99-5ebde30e05d6-kube-api-access-zkfkf\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.963755 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jbz45" event={"ID":"d0cdda4b-3de4-484a-aa99-5ebde30e05d6","Type":"ContainerDied","Data":"fca1fccde96f042ddb69aa3d7819a24c94f3aa24dbdbabcc6751de0e4f9e0283"} Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.963781 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jbz45" Feb 18 00:56:18 crc kubenswrapper[4858]: I0218 00:56:18.963800 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca1fccde96f042ddb69aa3d7819a24c94f3aa24dbdbabcc6751de0e4f9e0283" Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.185576 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.185863 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cff65b88-5359-4a4c-a85c-d502b0958655" containerName="nova-scheduler-scheduler" containerID="cri-o://b7ccfe17b67f842a2c7787ee0076fd9dce772b920a2c51781a44ce67c5f45cbd" gracePeriod=30 Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.207995 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.208101 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.213107 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.304871 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.305174 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-log" containerID="cri-o://dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87" gracePeriod=30 Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.305247 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-metadata" containerID="cri-o://fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279" gracePeriod=30 Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.979778 4858 generic.go:334] "Generic (PLEG): container finished" podID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerID="dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87" exitCode=143 Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.979894 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a413ec36-cb52-4519-ac7c-e7f126b37892","Type":"ContainerDied","Data":"dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87"} Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.980023 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-log" containerID="cri-o://5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc" gracePeriod=30 Feb 18 00:56:19 crc kubenswrapper[4858]: I0218 00:56:19.980513 4858 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/nova-api-0" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-api" containerID="cri-o://c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d" gracePeriod=30 Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.003763 4858 generic.go:334] "Generic (PLEG): container finished" podID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerID="5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc" exitCode=143 Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.003847 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd970003-e8fd-4cb2-b95e-f85e79329ae1","Type":"ContainerDied","Data":"5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc"} Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.013580 4858 generic.go:334] "Generic (PLEG): container finished" podID="cff65b88-5359-4a4c-a85c-d502b0958655" containerID="b7ccfe17b67f842a2c7787ee0076fd9dce772b920a2c51781a44ce67c5f45cbd" exitCode=0 Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.013645 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cff65b88-5359-4a4c-a85c-d502b0958655","Type":"ContainerDied","Data":"b7ccfe17b67f842a2c7787ee0076fd9dce772b920a2c51781a44ce67c5f45cbd"} Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.013686 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cff65b88-5359-4a4c-a85c-d502b0958655","Type":"ContainerDied","Data":"98aa3672486a5575b372aa59beeb623c187b89ef4eb01af5dd7d1bac944edf50"} Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.013706 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98aa3672486a5575b372aa59beeb623c187b89ef4eb01af5dd7d1bac944edf50" Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.065683 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.174222 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-combined-ca-bundle\") pod \"cff65b88-5359-4a4c-a85c-d502b0958655\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.174338 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc4s4\" (UniqueName: \"kubernetes.io/projected/cff65b88-5359-4a4c-a85c-d502b0958655-kube-api-access-mc4s4\") pod \"cff65b88-5359-4a4c-a85c-d502b0958655\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.174366 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-config-data\") pod \"cff65b88-5359-4a4c-a85c-d502b0958655\" (UID: \"cff65b88-5359-4a4c-a85c-d502b0958655\") " Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.180433 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff65b88-5359-4a4c-a85c-d502b0958655-kube-api-access-mc4s4" (OuterVolumeSpecName: "kube-api-access-mc4s4") pod "cff65b88-5359-4a4c-a85c-d502b0958655" (UID: "cff65b88-5359-4a4c-a85c-d502b0958655"). InnerVolumeSpecName "kube-api-access-mc4s4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.218416 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cff65b88-5359-4a4c-a85c-d502b0958655" (UID: "cff65b88-5359-4a4c-a85c-d502b0958655"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.218440 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-config-data" (OuterVolumeSpecName: "config-data") pod "cff65b88-5359-4a4c-a85c-d502b0958655" (UID: "cff65b88-5359-4a4c-a85c-d502b0958655"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.277051 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc4s4\" (UniqueName: \"kubernetes.io/projected/cff65b88-5359-4a4c-a85c-d502b0958655-kube-api-access-mc4s4\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.277089 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:21 crc kubenswrapper[4858]: I0218 00:56:21.277102 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff65b88-5359-4a4c-a85c-d502b0958655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.026108 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.064932 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.086049 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.125237 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:56:22 crc kubenswrapper[4858]: E0218 00:56:22.125908 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff65b88-5359-4a4c-a85c-d502b0958655" containerName="nova-scheduler-scheduler" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.125940 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff65b88-5359-4a4c-a85c-d502b0958655" containerName="nova-scheduler-scheduler" Feb 18 00:56:22 crc kubenswrapper[4858]: E0218 00:56:22.125971 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e75e4d-3e34-492b-8be5-15be3867f605" containerName="init" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.125981 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e75e4d-3e34-492b-8be5-15be3867f605" containerName="init" Feb 18 00:56:22 crc kubenswrapper[4858]: E0218 00:56:22.126013 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cdda4b-3de4-484a-aa99-5ebde30e05d6" containerName="nova-manage" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.126021 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cdda4b-3de4-484a-aa99-5ebde30e05d6" containerName="nova-manage" Feb 18 00:56:22 crc kubenswrapper[4858]: E0218 00:56:22.126034 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e75e4d-3e34-492b-8be5-15be3867f605" containerName="dnsmasq-dns" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.126044 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e75e4d-3e34-492b-8be5-15be3867f605" containerName="dnsmasq-dns" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.126274 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff65b88-5359-4a4c-a85c-d502b0958655" containerName="nova-scheduler-scheduler" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.126304 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e75e4d-3e34-492b-8be5-15be3867f605" containerName="dnsmasq-dns" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.126318 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cdda4b-3de4-484a-aa99-5ebde30e05d6" containerName="nova-manage" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.127215 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.130456 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.143183 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.195334 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr8hp\" (UniqueName: \"kubernetes.io/projected/16fae84a-ee9d-47b2-b83f-35aa53ac7da0-kube-api-access-rr8hp\") pod \"nova-scheduler-0\" (UID: \"16fae84a-ee9d-47b2-b83f-35aa53ac7da0\") " pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.195409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fae84a-ee9d-47b2-b83f-35aa53ac7da0-config-data\") pod \"nova-scheduler-0\" (UID: \"16fae84a-ee9d-47b2-b83f-35aa53ac7da0\") " pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.195831 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fae84a-ee9d-47b2-b83f-35aa53ac7da0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16fae84a-ee9d-47b2-b83f-35aa53ac7da0\") " pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.298220 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fae84a-ee9d-47b2-b83f-35aa53ac7da0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16fae84a-ee9d-47b2-b83f-35aa53ac7da0\") " pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.298468 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr8hp\" (UniqueName: \"kubernetes.io/projected/16fae84a-ee9d-47b2-b83f-35aa53ac7da0-kube-api-access-rr8hp\") pod \"nova-scheduler-0\" (UID: \"16fae84a-ee9d-47b2-b83f-35aa53ac7da0\") " pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.298693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fae84a-ee9d-47b2-b83f-35aa53ac7da0-config-data\") pod \"nova-scheduler-0\" (UID: \"16fae84a-ee9d-47b2-b83f-35aa53ac7da0\") " pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.303693 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fae84a-ee9d-47b2-b83f-35aa53ac7da0-config-data\") pod \"nova-scheduler-0\" (UID: \"16fae84a-ee9d-47b2-b83f-35aa53ac7da0\") " pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.317416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fae84a-ee9d-47b2-b83f-35aa53ac7da0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16fae84a-ee9d-47b2-b83f-35aa53ac7da0\") " pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.321912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr8hp\" (UniqueName: 
\"kubernetes.io/projected/16fae84a-ee9d-47b2-b83f-35aa53ac7da0-kube-api-access-rr8hp\") pod \"nova-scheduler-0\" (UID: \"16fae84a-ee9d-47b2-b83f-35aa53ac7da0\") " pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.454300 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": read tcp 10.217.0.2:57072->10.217.0.225:8775: read: connection reset by peer" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.454302 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": read tcp 10.217.0.2:57086->10.217.0.225:8775: read: connection reset by peer" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.454688 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:56:22 crc kubenswrapper[4858]: I0218 00:56:22.936211 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:56:22 crc kubenswrapper[4858]: W0218 00:56:22.937661 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16fae84a_ee9d_47b2_b83f_35aa53ac7da0.slice/crio-625e6287ea70ad1274cb8cac91a4b5226d421cad9e6a2fb200f7cfa1b2bed28d WatchSource:0}: Error finding container 625e6287ea70ad1274cb8cac91a4b5226d421cad9e6a2fb200f7cfa1b2bed28d: Status 404 returned error can't find the container with id 625e6287ea70ad1274cb8cac91a4b5226d421cad9e6a2fb200f7cfa1b2bed28d Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.029010 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.054296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16fae84a-ee9d-47b2-b83f-35aa53ac7da0","Type":"ContainerStarted","Data":"625e6287ea70ad1274cb8cac91a4b5226d421cad9e6a2fb200f7cfa1b2bed28d"} Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.076576 4858 generic.go:334] "Generic (PLEG): container finished" podID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerID="fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279" exitCode=0 Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.076622 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a413ec36-cb52-4519-ac7c-e7f126b37892","Type":"ContainerDied","Data":"fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279"} Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.076651 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a413ec36-cb52-4519-ac7c-e7f126b37892","Type":"ContainerDied","Data":"670b6f30d8fe89cb5e070c36b8a930861191c5412e7a98baa34f4e19b003d610"} Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.076673 4858 scope.go:117] "RemoveContainer" containerID="fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.076805 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.111932 4858 scope.go:117] "RemoveContainer" containerID="dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.117550 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhggl\" (UniqueName: \"kubernetes.io/projected/a413ec36-cb52-4519-ac7c-e7f126b37892-kube-api-access-fhggl\") pod \"a413ec36-cb52-4519-ac7c-e7f126b37892\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.117679 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-nova-metadata-tls-certs\") pod \"a413ec36-cb52-4519-ac7c-e7f126b37892\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.117730 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-config-data\") pod \"a413ec36-cb52-4519-ac7c-e7f126b37892\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.117748 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-combined-ca-bundle\") pod \"a413ec36-cb52-4519-ac7c-e7f126b37892\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.117821 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a413ec36-cb52-4519-ac7c-e7f126b37892-logs\") pod \"a413ec36-cb52-4519-ac7c-e7f126b37892\" (UID: \"a413ec36-cb52-4519-ac7c-e7f126b37892\") " Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.119666 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a413ec36-cb52-4519-ac7c-e7f126b37892-logs" (OuterVolumeSpecName: "logs") pod "a413ec36-cb52-4519-ac7c-e7f126b37892" (UID: "a413ec36-cb52-4519-ac7c-e7f126b37892"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.128702 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a413ec36-cb52-4519-ac7c-e7f126b37892-kube-api-access-fhggl" (OuterVolumeSpecName: "kube-api-access-fhggl") pod "a413ec36-cb52-4519-ac7c-e7f126b37892" (UID: "a413ec36-cb52-4519-ac7c-e7f126b37892"). InnerVolumeSpecName "kube-api-access-fhggl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.153920 4858 scope.go:117] "RemoveContainer" containerID="fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279" Feb 18 00:56:23 crc kubenswrapper[4858]: E0218 00:56:23.157385 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279\": container with ID starting with fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279 not found: ID does not exist" containerID="fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.157431 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279"} err="failed to get container status \"fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279\": rpc error: code = NotFound desc = could not find container \"fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279\": container with ID starting with fa0e59608c186c1b983f7566b2193c589a983c1cf36cbf4d6a2ebd3bcdd92279 not found: ID does not exist" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.157472 4858 scope.go:117] "RemoveContainer" containerID="dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.160946 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a413ec36-cb52-4519-ac7c-e7f126b37892" (UID: "a413ec36-cb52-4519-ac7c-e7f126b37892"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:23 crc kubenswrapper[4858]: E0218 00:56:23.160991 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87\": container with ID starting with dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87 not found: ID does not exist" containerID="dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.161121 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87"} err="failed to get container status \"dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87\": rpc error: code = NotFound desc = could not find container \"dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87\": container with ID starting with dc79404e843e24d042d63cefb64d7d09c7c7de2f32bf65859526b0ba40559a87 not found: ID does not exist" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.175565 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-config-data" (OuterVolumeSpecName: "config-data") pod "a413ec36-cb52-4519-ac7c-e7f126b37892" (UID: "a413ec36-cb52-4519-ac7c-e7f126b37892"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.190408 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a413ec36-cb52-4519-ac7c-e7f126b37892" (UID: "a413ec36-cb52-4519-ac7c-e7f126b37892"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.219985 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a413ec36-cb52-4519-ac7c-e7f126b37892-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.220018 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhggl\" (UniqueName: \"kubernetes.io/projected/a413ec36-cb52-4519-ac7c-e7f126b37892-kube-api-access-fhggl\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.220032 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.220045 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.220059 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a413ec36-cb52-4519-ac7c-e7f126b37892-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.411387 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.465369 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cff65b88-5359-4a4c-a85c-d502b0958655" path="/var/lib/kubelet/pods/cff65b88-5359-4a4c-a85c-d502b0958655/volumes" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.466541 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.466583 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:56:23 crc kubenswrapper[4858]: E0218 00:56:23.467039 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-metadata" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.467065 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-metadata" Feb 18 00:56:23 crc kubenswrapper[4858]: E0218 00:56:23.467101 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-log" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.467110 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-log" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.467419 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" 
containerName="nova-metadata-metadata" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.467450 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" containerName="nova-metadata-log" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.469309 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.470978 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.471640 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.472718 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.637272 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024c4106-6664-48f0-a098-6638f4d9a9f5-config-data\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.637403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/024c4106-6664-48f0-a098-6638f4d9a9f5-logs\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.637453 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/024c4106-6664-48f0-a098-6638f4d9a9f5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.637507 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024c4106-6664-48f0-a098-6638f4d9a9f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.637712 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4csm\" (UniqueName: \"kubernetes.io/projected/024c4106-6664-48f0-a098-6638f4d9a9f5-kube-api-access-s4csm\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.739560 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/024c4106-6664-48f0-a098-6638f4d9a9f5-logs\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.739652 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/024c4106-6664-48f0-a098-6638f4d9a9f5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 
00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.739719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024c4106-6664-48f0-a098-6638f4d9a9f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.739787 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4csm\" (UniqueName: \"kubernetes.io/projected/024c4106-6664-48f0-a098-6638f4d9a9f5-kube-api-access-s4csm\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.739833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024c4106-6664-48f0-a098-6638f4d9a9f5-config-data\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.740739 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/024c4106-6664-48f0-a098-6638f4d9a9f5-logs\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.744869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024c4106-6664-48f0-a098-6638f4d9a9f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.748104 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/024c4106-6664-48f0-a098-6638f4d9a9f5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.752008 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024c4106-6664-48f0-a098-6638f4d9a9f5-config-data\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.763427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4csm\" (UniqueName: \"kubernetes.io/projected/024c4106-6664-48f0-a098-6638f4d9a9f5-kube-api-access-s4csm\") pod \"nova-metadata-0\" (UID: \"024c4106-6664-48f0-a098-6638f4d9a9f5\") " pod="openstack/nova-metadata-0" Feb 18 00:56:23 crc kubenswrapper[4858]: I0218 00:56:23.823304 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:56:24 crc kubenswrapper[4858]: I0218 00:56:24.089381 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16fae84a-ee9d-47b2-b83f-35aa53ac7da0","Type":"ContainerStarted","Data":"d54981e46d718d0ef592f347ebc6e91098c91308c6a89f603ccd3d48e70aea28"} Feb 18 00:56:24 crc kubenswrapper[4858]: I0218 00:56:24.120315 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.120295207 podStartE2EDuration="2.120295207s" podCreationTimestamp="2026-02-18 00:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:56:24.116026183 +0000 UTC m=+1337.421862925" watchObservedRunningTime="2026-02-18 00:56:24.120295207 +0000 UTC m=+1337.426131949" Feb 18 00:56:24 crc kubenswrapper[4858]: W0218 00:56:24.285488 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod024c4106_6664_48f0_a098_6638f4d9a9f5.slice/crio-0d2d5e3bd162f5e268fc1b9dfcacad6ffcf1ded357f3459c0e6ad8a22d1ce60c WatchSource:0}: Error finding container 0d2d5e3bd162f5e268fc1b9dfcacad6ffcf1ded357f3459c0e6ad8a22d1ce60c: Status 404 returned error can't find the container with id 0d2d5e3bd162f5e268fc1b9dfcacad6ffcf1ded357f3459c0e6ad8a22d1ce60c Feb 18 00:56:24 crc kubenswrapper[4858]: I0218 00:56:24.292269 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.016562 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.106259 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"024c4106-6664-48f0-a098-6638f4d9a9f5","Type":"ContainerStarted","Data":"b2105a4801f0dbea1312e12215c02d04556150094d4352312549c21f3e533f16"} Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.106299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"024c4106-6664-48f0-a098-6638f4d9a9f5","Type":"ContainerStarted","Data":"b16b67189dcd3726284a95229136e7d3e1e41ecf22944dc526a4a9511f1d3d94"} Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.106309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"024c4106-6664-48f0-a098-6638f4d9a9f5","Type":"ContainerStarted","Data":"0d2d5e3bd162f5e268fc1b9dfcacad6ffcf1ded357f3459c0e6ad8a22d1ce60c"} Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.112537 4858 generic.go:334] "Generic (PLEG): container finished" podID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerID="c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d" exitCode=0 Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.112613 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.112620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd970003-e8fd-4cb2-b95e-f85e79329ae1","Type":"ContainerDied","Data":"c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d"} Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.113600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd970003-e8fd-4cb2-b95e-f85e79329ae1","Type":"ContainerDied","Data":"4e66fcadd6c76fc55999cfa2a7b393126f6a14f50b0e6dcb140038101fce4fb2"} Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.113619 4858 scope.go:117] "RemoveContainer" containerID="c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.132918 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.13288613 podStartE2EDuration="2.13288613s" podCreationTimestamp="2026-02-18 00:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:56:25.126864404 +0000 UTC m=+1338.432701136" watchObservedRunningTime="2026-02-18 00:56:25.13288613 +0000 UTC m=+1338.438722862" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.143961 4858 scope.go:117] "RemoveContainer" containerID="5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.160422 4858 scope.go:117] "RemoveContainer" containerID="c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d" Feb 18 00:56:25 crc kubenswrapper[4858]: E0218 00:56:25.160718 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d\": container with ID starting with c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d not found: ID does not exist" containerID="c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.160745 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d"} err="failed to get container status \"c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d\": rpc error: code = NotFound desc = could not find container \"c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d\": container with ID starting with c6be8962f910de0c1c570d75d29254362fed140ac28a0cc96d1f36916b906e9d not found: ID does not exist" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.160763 4858 scope.go:117] "RemoveContainer" containerID="5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc" Feb 18 00:56:25 crc kubenswrapper[4858]: E0218 00:56:25.161071 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc\": container with ID starting with 5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc not found: ID does not exist" containerID="5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.161132 4858 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc"} err="failed to get container status \"5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc\": rpc error: code = NotFound desc = could not find container \"5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc\": container with ID starting with 5a745410e175a1468404448b62a9e718ddde7fd04e61fbd4743b485b5757c6bc not found: ID does not exist" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.170718 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd970003-e8fd-4cb2-b95e-f85e79329ae1-logs\") pod \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.170829 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-config-data\") pod \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.171055 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd970003-e8fd-4cb2-b95e-f85e79329ae1-logs" (OuterVolumeSpecName: "logs") pod "bd970003-e8fd-4cb2-b95e-f85e79329ae1" (UID: "bd970003-e8fd-4cb2-b95e-f85e79329ae1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.171348 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-internal-tls-certs\") pod \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.171425 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xsh5\" (UniqueName: \"kubernetes.io/projected/bd970003-e8fd-4cb2-b95e-f85e79329ae1-kube-api-access-2xsh5\") pod \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.171462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-public-tls-certs\") pod \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.171588 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-combined-ca-bundle\") pod \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\" (UID: \"bd970003-e8fd-4cb2-b95e-f85e79329ae1\") " Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.172153 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd970003-e8fd-4cb2-b95e-f85e79329ae1-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.174468 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd970003-e8fd-4cb2-b95e-f85e79329ae1-kube-api-access-2xsh5" (OuterVolumeSpecName: "kube-api-access-2xsh5") pod 
"bd970003-e8fd-4cb2-b95e-f85e79329ae1" (UID: "bd970003-e8fd-4cb2-b95e-f85e79329ae1"). InnerVolumeSpecName "kube-api-access-2xsh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.196521 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd970003-e8fd-4cb2-b95e-f85e79329ae1" (UID: "bd970003-e8fd-4cb2-b95e-f85e79329ae1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.200315 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-config-data" (OuterVolumeSpecName: "config-data") pod "bd970003-e8fd-4cb2-b95e-f85e79329ae1" (UID: "bd970003-e8fd-4cb2-b95e-f85e79329ae1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.217778 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd970003-e8fd-4cb2-b95e-f85e79329ae1" (UID: "bd970003-e8fd-4cb2-b95e-f85e79329ae1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.219183 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bd970003-e8fd-4cb2-b95e-f85e79329ae1" (UID: "bd970003-e8fd-4cb2-b95e-f85e79329ae1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.264809 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.264863 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.273996 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.274023 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.274034 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xsh5\" (UniqueName: \"kubernetes.io/projected/bd970003-e8fd-4cb2-b95e-f85e79329ae1-kube-api-access-2xsh5\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.274042 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.274053 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd970003-e8fd-4cb2-b95e-f85e79329ae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.430759 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a413ec36-cb52-4519-ac7c-e7f126b37892" path="/var/lib/kubelet/pods/a413ec36-cb52-4519-ac7c-e7f126b37892/volumes" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.455869 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.470481 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.489848 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:25 crc kubenswrapper[4858]: E0218 00:56:25.490305 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-api" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.490323 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-api" Feb 18 00:56:25 crc kubenswrapper[4858]: E0218 00:56:25.490355 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-log" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.490362 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-log" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.490556 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-log" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.490581 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" containerName="nova-api-api" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.491627 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.494288 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.494315 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.494455 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.502864 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.579830 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.579885 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-config-data\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.580267 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-logs\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.580335 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5z4t\" (UniqueName: \"kubernetes.io/projected/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-kube-api-access-b5z4t\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.580358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-public-tls-certs\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.580482 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: 
I0218 00:56:25.682398 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-logs\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.682459 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5z4t\" (UniqueName: \"kubernetes.io/projected/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-kube-api-access-b5z4t\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.682510 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-public-tls-certs\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.682610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.682689 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.682725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-config-data\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.683058 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-logs\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.687222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.688388 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.689173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-public-tls-certs\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.693437 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-config-data\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.703902 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5z4t\" (UniqueName: \"kubernetes.io/projected/fc1dcf66-88aa-4f05-89e7-b107f6a49ce6-kube-api-access-b5z4t\") pod \"nova-api-0\" (UID: \"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6\") " pod="openstack/nova-api-0" Feb 18 00:56:25 crc kubenswrapper[4858]: I0218 00:56:25.813018 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:56:26 crc kubenswrapper[4858]: I0218 00:56:26.363347 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:56:27 crc kubenswrapper[4858]: I0218 00:56:27.135837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6","Type":"ContainerStarted","Data":"fe429609dd235aa5d1d8ca145b91d925ffa71e04a9696009e308427d4a0d0e09"} Feb 18 00:56:27 crc kubenswrapper[4858]: I0218 00:56:27.136221 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6","Type":"ContainerStarted","Data":"3f89068fc3e2ce93225b8a0b90d034bbf50ccc13b88fce04997335d5dc3da774"} Feb 18 00:56:27 crc kubenswrapper[4858]: I0218 00:56:27.136240 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc1dcf66-88aa-4f05-89e7-b107f6a49ce6","Type":"ContainerStarted","Data":"115a57bc3bf1fe9bac97974c5be2bce5f0e086c583887c7ab304cbebcc7408ba"} Feb 18 00:56:27 crc kubenswrapper[4858]: I0218 00:56:27.169977 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.169956306 podStartE2EDuration="2.169956306s" podCreationTimestamp="2026-02-18 00:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:56:27.155625517 +0000 UTC m=+1340.461462279" watchObservedRunningTime="2026-02-18 00:56:27.169956306 +0000 UTC m=+1340.475793038" Feb 18 00:56:27 crc kubenswrapper[4858]: I0218 00:56:27.439826 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd970003-e8fd-4cb2-b95e-f85e79329ae1" path="/var/lib/kubelet/pods/bd970003-e8fd-4cb2-b95e-f85e79329ae1/volumes" Feb 18 00:56:27 crc kubenswrapper[4858]: I0218 00:56:27.455689 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 00:56:28 crc kubenswrapper[4858]: I0218 00:56:28.823713 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 00:56:28 crc kubenswrapper[4858]: I0218 00:56:28.824076 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 00:56:32 crc kubenswrapper[4858]: I0218 00:56:32.455553 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 00:56:32 crc kubenswrapper[4858]: I0218 00:56:32.504898 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 00:56:33 crc kubenswrapper[4858]: I0218 00:56:33.241164 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 00:56:33 crc kubenswrapper[4858]: I0218 00:56:33.825745 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 00:56:33 crc kubenswrapper[4858]: I0218 00:56:33.826205 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 00:56:34 crc kubenswrapper[4858]: I0218 00:56:34.834616 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="024c4106-6664-48f0-a098-6638f4d9a9f5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.234:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:56:34 crc kubenswrapper[4858]: I0218 00:56:34.834669 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="024c4106-6664-48f0-a098-6638f4d9a9f5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.234:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:56:35 crc kubenswrapper[4858]: I0218 00:56:35.814213 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:56:35 crc kubenswrapper[4858]: I0218 00:56:35.814599 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:56:36 crc kubenswrapper[4858]: I0218 00:56:36.827684 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc1dcf66-88aa-4f05-89e7-b107f6a49ce6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.235:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:56:36 crc kubenswrapper[4858]: I0218 00:56:36.828200 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc1dcf66-88aa-4f05-89e7-b107f6a49ce6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.235:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:56:39 crc kubenswrapper[4858]: I0218 00:56:39.217767 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 00:56:43 crc kubenswrapper[4858]: I0218 00:56:43.839086 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 00:56:43 crc kubenswrapper[4858]: I0218 00:56:43.840526 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 00:56:43 crc kubenswrapper[4858]: I0218 00:56:43.850834 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 00:56:44 crc kubenswrapper[4858]: I0218 00:56:44.344964 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 00:56:45 crc kubenswrapper[4858]: I0218 00:56:45.826259 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 00:56:45 crc kubenswrapper[4858]: I0218 00:56:45.827024 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 00:56:45 crc kubenswrapper[4858]: I0218 00:56:45.830225 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 
00:56:45 crc kubenswrapper[4858]: I0218 00:56:45.838882 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 00:56:46 crc kubenswrapper[4858]: I0218 00:56:46.357901 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 00:56:46 crc kubenswrapper[4858]: I0218 00:56:46.370988 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.265810 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.266318 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.266368 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.267625 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b23372d3ce28e5b83a0ba7f985899a4c07e0af615e7bbb60af93be8938ae620"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.267748 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://3b23372d3ce28e5b83a0ba7f985899a4c07e0af615e7bbb60af93be8938ae620" gracePeriod=600 Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.463169 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="3b23372d3ce28e5b83a0ba7f985899a4c07e0af615e7bbb60af93be8938ae620" exitCode=0 Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.463256 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"3b23372d3ce28e5b83a0ba7f985899a4c07e0af615e7bbb60af93be8938ae620"} Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.463606 4858 scope.go:117] "RemoveContainer" containerID="4583245cbf90bbb57e10dd728d6513f324cbca372738a19b656a2071d981e8c4" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.659884 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-bpmww"] Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.672037 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-bpmww"] Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.775859 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-h2mps"] Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 
00:56:55.777531 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.781131 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.802785 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-h2mps"] Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.880148 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-certs\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.880405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-combined-ca-bundle\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.880479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-scripts\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.880746 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t22t\" (UniqueName: \"kubernetes.io/projected/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-kube-api-access-2t22t\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:55 crc kubenswrapper[4858]: I0218 00:56:55.881040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-config-data\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.008093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-config-data\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.008262 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-certs\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.008657 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-combined-ca-bundle\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 
00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.008753 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-scripts\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.008939 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t22t\" (UniqueName: \"kubernetes.io/projected/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-kube-api-access-2t22t\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.015864 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-combined-ca-bundle\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.017029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-scripts\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.018906 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-certs\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.021809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-config-data\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.042243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t22t\" (UniqueName: \"kubernetes.io/projected/8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9-kube-api-access-2t22t\") pod \"cloudkitty-db-sync-h2mps\" (UID: \"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9\") " pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.096128 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-h2mps" Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.477776 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab"} Feb 18 00:56:56 crc kubenswrapper[4858]: I0218 00:56:56.591375 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-h2mps"] Feb 18 00:56:56 crc kubenswrapper[4858]: W0218 00:56:56.599812 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d4b989d_a12e_4902_b4fa_c64e7d8e0fd9.slice/crio-deb4422e9e336318d1d66516783cf49e537fb3aee7b6f537df0057746f991750 WatchSource:0}: Error finding container deb4422e9e336318d1d66516783cf49e537fb3aee7b6f537df0057746f991750: Status 404 returned error can't find the container with id deb4422e9e336318d1d66516783cf49e537fb3aee7b6f537df0057746f991750 Feb 18 00:56:56 crc kubenswrapper[4858]: E0218 00:56:56.707938 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:56:56 crc kubenswrapper[4858]: E0218 00:56:56.708188 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:56:56 crc kubenswrapper[4858]: E0218 00:56:56.708327 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:56:56 crc kubenswrapper[4858]: E0218 00:56:56.709454 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:56:57 crc kubenswrapper[4858]: I0218 00:56:57.434356 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a7b55c-92f4-41e7-b862-45eadd76013b" path="/var/lib/kubelet/pods/48a7b55c-92f4-41e7-b862-45eadd76013b/volumes" Feb 18 00:56:57 crc kubenswrapper[4858]: I0218 00:56:57.442239 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:57 crc kubenswrapper[4858]: I0218 00:56:57.442535 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="ceilometer-central-agent" containerID="cri-o://43143daaacbcdf8a03d10aa77cb57594c7b0475131a50b08a4145af8289cc961" gracePeriod=30 Feb 18 00:56:57 crc kubenswrapper[4858]: I0218 00:56:57.442648 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="proxy-httpd" containerID="cri-o://21e87866bde753575b4fe2bf181f94d3806a2750ee40249f40b1a12e51b0f078" gracePeriod=30 Feb 18 00:56:57 crc kubenswrapper[4858]: I0218 00:56:57.442730 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="sg-core" containerID="cri-o://f41decab432703f0f263c81cd87dc523d7a5c2bdd3945711618e2defccf5445c" gracePeriod=30 Feb 18 00:56:57 crc kubenswrapper[4858]: I0218 00:56:57.442730 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="ceilometer-notification-agent" containerID="cri-o://e1084691d2fe7f4dffb34a9824f7e81df9c3c8552314420e0b1c13f808ce6461" gracePeriod=30 Feb 18 00:56:57 crc kubenswrapper[4858]: I0218 00:56:57.494419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-h2mps" event={"ID":"8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9","Type":"ContainerStarted","Data":"deb4422e9e336318d1d66516783cf49e537fb3aee7b6f537df0057746f991750"} Feb 18 00:56:57 crc kubenswrapper[4858]: E0218 00:56:57.495889 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:56:57 crc kubenswrapper[4858]: I0218 00:56:57.548450 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.245031 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.512665 4858 generic.go:334] "Generic (PLEG): container finished" podID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerID="21e87866bde753575b4fe2bf181f94d3806a2750ee40249f40b1a12e51b0f078" exitCode=0 Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.512900 4858 generic.go:334] "Generic (PLEG): container finished" podID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerID="f41decab432703f0f263c81cd87dc523d7a5c2bdd3945711618e2defccf5445c" exitCode=2 Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.512909 4858 generic.go:334] 
"Generic (PLEG): container finished" podID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerID="e1084691d2fe7f4dffb34a9824f7e81df9c3c8552314420e0b1c13f808ce6461" exitCode=0 Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.512915 4858 generic.go:334] "Generic (PLEG): container finished" podID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerID="43143daaacbcdf8a03d10aa77cb57594c7b0475131a50b08a4145af8289cc961" exitCode=0 Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.512746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerDied","Data":"21e87866bde753575b4fe2bf181f94d3806a2750ee40249f40b1a12e51b0f078"} Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.513803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerDied","Data":"f41decab432703f0f263c81cd87dc523d7a5c2bdd3945711618e2defccf5445c"} Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.513817 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerDied","Data":"e1084691d2fe7f4dffb34a9824f7e81df9c3c8552314420e0b1c13f808ce6461"} Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.513826 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerDied","Data":"43143daaacbcdf8a03d10aa77cb57594c7b0475131a50b08a4145af8289cc961"} Feb 18 00:56:58 crc kubenswrapper[4858]: E0218 00:56:58.515007 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.637776 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.659921 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-ceilometer-tls-certs\") pod \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.659967 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkstv\" (UniqueName: \"kubernetes.io/projected/d6b4bc81-da80-4ab1-89b2-4ece6e825118-kube-api-access-gkstv\") pod \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.660125 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-sg-core-conf-yaml\") pod \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.660167 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-scripts\") pod \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.660204 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-combined-ca-bundle\") pod \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.660280 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-log-httpd\") pod \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.660322 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-run-httpd\") pod \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.660345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-config-data\") pod \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\" (UID: \"d6b4bc81-da80-4ab1-89b2-4ece6e825118\") " Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.662461 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d6b4bc81-da80-4ab1-89b2-4ece6e825118" (UID: "d6b4bc81-da80-4ab1-89b2-4ece6e825118"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.662722 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d6b4bc81-da80-4ab1-89b2-4ece6e825118" (UID: "d6b4bc81-da80-4ab1-89b2-4ece6e825118"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.671022 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b4bc81-da80-4ab1-89b2-4ece6e825118-kube-api-access-gkstv" (OuterVolumeSpecName: "kube-api-access-gkstv") pod "d6b4bc81-da80-4ab1-89b2-4ece6e825118" (UID: "d6b4bc81-da80-4ab1-89b2-4ece6e825118"). InnerVolumeSpecName "kube-api-access-gkstv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.686740 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-scripts" (OuterVolumeSpecName: "scripts") pod "d6b4bc81-da80-4ab1-89b2-4ece6e825118" (UID: "d6b4bc81-da80-4ab1-89b2-4ece6e825118"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.730711 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d6b4bc81-da80-4ab1-89b2-4ece6e825118" (UID: "d6b4bc81-da80-4ab1-89b2-4ece6e825118"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.763461 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.763783 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d6b4bc81-da80-4ab1-89b2-4ece6e825118-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.763896 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkstv\" (UniqueName: \"kubernetes.io/projected/d6b4bc81-da80-4ab1-89b2-4ece6e825118-kube-api-access-gkstv\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.764015 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.764109 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.780322 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d6b4bc81-da80-4ab1-89b2-4ece6e825118" (UID: "d6b4bc81-da80-4ab1-89b2-4ece6e825118"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.797385 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6b4bc81-da80-4ab1-89b2-4ece6e825118" (UID: "d6b4bc81-da80-4ab1-89b2-4ece6e825118"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.823233 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-config-data" (OuterVolumeSpecName: "config-data") pod "d6b4bc81-da80-4ab1-89b2-4ece6e825118" (UID: "d6b4bc81-da80-4ab1-89b2-4ece6e825118"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.867146 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.867174 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:58 crc kubenswrapper[4858]: I0218 00:56:58.867184 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b4bc81-da80-4ab1-89b2-4ece6e825118-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.533983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d6b4bc81-da80-4ab1-89b2-4ece6e825118","Type":"ContainerDied","Data":"2dec41fc12bdf4db43619607a035b094b502e7e39949701bc6f833fefa87efdb"} Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.535900 4858 scope.go:117] "RemoveContainer" containerID="21e87866bde753575b4fe2bf181f94d3806a2750ee40249f40b1a12e51b0f078" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.536152 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.572682 4858 scope.go:117] "RemoveContainer" containerID="f41decab432703f0f263c81cd87dc523d7a5c2bdd3945711618e2defccf5445c" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.583576 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.600168 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.614778 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:59 crc kubenswrapper[4858]: E0218 00:56:59.615239 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="proxy-httpd" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.615257 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="proxy-httpd" Feb 18 00:56:59 crc kubenswrapper[4858]: E0218 00:56:59.615273 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="sg-core" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.615280 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="sg-core" Feb 18 00:56:59 crc kubenswrapper[4858]: E0218 00:56:59.615296 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="ceilometer-central-agent" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.615302 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="ceilometer-central-agent" Feb 18 00:56:59 crc kubenswrapper[4858]: E0218 00:56:59.615331 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="ceilometer-notification-agent" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.615337 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="ceilometer-notification-agent" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.615505 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="ceilometer-central-agent" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.615541 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="sg-core" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.615553 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="ceilometer-notification-agent" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.615572 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" containerName="proxy-httpd" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.617415 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.626542 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.633607 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.633827 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.663677 4858 scope.go:117] "RemoveContainer" containerID="e1084691d2fe7f4dffb34a9824f7e81df9c3c8552314420e0b1c13f808ce6461" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.663952 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.681192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.681243 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-scripts\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.681282 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b28954c-8d35-4f43-a44b-307a56f6fff5-log-httpd\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.681364 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.681655 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-config-data\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.681900 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qb4\" (UniqueName: \"kubernetes.io/projected/1b28954c-8d35-4f43-a44b-307a56f6fff5-kube-api-access-x6qb4\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.681958 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b28954c-8d35-4f43-a44b-307a56f6fff5-run-httpd\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.682040 
4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.691668 4858 scope.go:117] "RemoveContainer" containerID="43143daaacbcdf8a03d10aa77cb57594c7b0475131a50b08a4145af8289cc961" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.784009 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-scripts\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.784071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b28954c-8d35-4f43-a44b-307a56f6fff5-log-httpd\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.784092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.784151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-config-data\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.784232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6qb4\" (UniqueName: \"kubernetes.io/projected/1b28954c-8d35-4f43-a44b-307a56f6fff5-kube-api-access-x6qb4\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.784261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b28954c-8d35-4f43-a44b-307a56f6fff5-run-httpd\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.784295 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.784326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.785699 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1b28954c-8d35-4f43-a44b-307a56f6fff5-log-httpd\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.786395 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b28954c-8d35-4f43-a44b-307a56f6fff5-run-httpd\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.789231 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.789420 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.789574 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-scripts\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.790710 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-config-data\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.797065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b28954c-8d35-4f43-a44b-307a56f6fff5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.802327 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6qb4\" (UniqueName: \"kubernetes.io/projected/1b28954c-8d35-4f43-a44b-307a56f6fff5-kube-api-access-x6qb4\") pod \"ceilometer-0\" (UID: \"1b28954c-8d35-4f43-a44b-307a56f6fff5\") " pod="openstack/ceilometer-0" Feb 18 00:56:59 crc kubenswrapper[4858]: I0218 00:56:59.989877 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:57:00 crc kubenswrapper[4858]: W0218 00:57:00.472392 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b28954c_8d35_4f43_a44b_307a56f6fff5.slice/crio-ac8b9835bd7c42f81d4710efbc6351c87b4ddc9800c08fc7431c6f41ae3c7a6b WatchSource:0}: Error finding container ac8b9835bd7c42f81d4710efbc6351c87b4ddc9800c08fc7431c6f41ae3c7a6b: Status 404 returned error can't find the container with id ac8b9835bd7c42f81d4710efbc6351c87b4ddc9800c08fc7431c6f41ae3c7a6b Feb 18 00:57:00 crc kubenswrapper[4858]: I0218 00:57:00.473274 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:57:00 crc kubenswrapper[4858]: I0218 00:57:00.553779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b28954c-8d35-4f43-a44b-307a56f6fff5","Type":"ContainerStarted","Data":"ac8b9835bd7c42f81d4710efbc6351c87b4ddc9800c08fc7431c6f41ae3c7a6b"} Feb 18 00:57:00 crc kubenswrapper[4858]: E0218 00:57:00.596617 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:57:00 crc kubenswrapper[4858]: E0218 00:57:00.596675 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:57:00 crc kubenswrapper[4858]: E0218 00:57:00.596815 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 00:57:01 crc kubenswrapper[4858]: I0218 00:57:01.435098 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b4bc81-da80-4ab1-89b2-4ece6e825118" path="/var/lib/kubelet/pods/d6b4bc81-da80-4ab1-89b2-4ece6e825118/volumes" Feb 18 00:57:01 crc kubenswrapper[4858]: I0218 00:57:01.571623 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b28954c-8d35-4f43-a44b-307a56f6fff5","Type":"ContainerStarted","Data":"2a34e358735947bf12ab9fef73abb2f9e364ebbd1a43e547c623274fc585ea03"} Feb 18 00:57:02 crc kubenswrapper[4858]: I0218 00:57:02.392844 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a53fffdd-3f92-4632-8391-cc89792884a8" containerName="rabbitmq" containerID="cri-o://63a35b3de0e3525ea596ea6f96026f20572c707cc07407fb8bf5ffa177e1d463" gracePeriod=604796 Feb 18 00:57:02 crc kubenswrapper[4858]: I0218 00:57:02.582175 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b28954c-8d35-4f43-a44b-307a56f6fff5","Type":"ContainerStarted","Data":"9fcf66a9e901e888bb0554a2aa5c0e3b875ac3f725899d7555eebe52724b960d"} Feb 18 00:57:02 crc kubenswrapper[4858]: I0218 00:57:02.811711 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="69e5abda-5efa-402f-b66c-320cf6ed1d99" containerName="rabbitmq" containerID="cri-o://dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446" gracePeriod=604796 Feb 18 00:57:03 crc kubenswrapper[4858]: E0218 00:57:03.601746 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:57:04 crc kubenswrapper[4858]: I0218 00:57:04.621370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b28954c-8d35-4f43-a44b-307a56f6fff5","Type":"ContainerStarted","Data":"eb094e258a5dc58b877fd7d8f2f3ec4db22b249fb340a3441fa8c292ce840d6d"} Feb 18 00:57:04 crc kubenswrapper[4858]: I0218 00:57:04.621944 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:57:04 crc kubenswrapper[4858]: E0218 00:57:04.624463 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:57:05 crc kubenswrapper[4858]: E0218 00:57:05.632724 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:57:08 crc kubenswrapper[4858]: I0218 00:57:08.658371 4858 generic.go:334] "Generic (PLEG): container finished" podID="a53fffdd-3f92-4632-8391-cc89792884a8" containerID="63a35b3de0e3525ea596ea6f96026f20572c707cc07407fb8bf5ffa177e1d463" exitCode=0 Feb 18 00:57:08 crc kubenswrapper[4858]: I0218 00:57:08.658559 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a53fffdd-3f92-4632-8391-cc89792884a8","Type":"ContainerDied","Data":"63a35b3de0e3525ea596ea6f96026f20572c707cc07407fb8bf5ffa177e1d463"} Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.119455 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.190262 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a53fffdd-3f92-4632-8391-cc89792884a8-erlang-cookie-secret\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.190663 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-plugins\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.190701 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-config-data\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.190758 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a53fffdd-3f92-4632-8391-cc89792884a8-pod-info\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.190814 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-tls\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.190872 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-plugins-conf\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.190900 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-confd\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.190978 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk8gh\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-kube-api-access-tk8gh\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.191005 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-erlang-cookie\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.194472 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod 
"a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.195603 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.195657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-server-conf\") pod \"a53fffdd-3f92-4632-8391-cc89792884a8\" (UID: \"a53fffdd-3f92-4632-8391-cc89792884a8\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.196114 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.198309 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.198352 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.198778 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.201742 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.207630 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-kube-api-access-tk8gh" (OuterVolumeSpecName: "kube-api-access-tk8gh") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "kube-api-access-tk8gh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.207724 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a53fffdd-3f92-4632-8391-cc89792884a8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.217867 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a53fffdd-3f92-4632-8391-cc89792884a8-pod-info" (OuterVolumeSpecName: "pod-info") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.253013 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-config-data" (OuterVolumeSpecName: "config-data") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.264865 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262" (OuterVolumeSpecName: "persistence") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "pvc-182dabb6-acc3-402d-adbb-a1c53881a262". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.285279 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-server-conf" (OuterVolumeSpecName: "server-conf") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.300363 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk8gh\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-kube-api-access-tk8gh\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.300422 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.300457 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") on node \"crc\" " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.300470 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.300482 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a53fffdd-3f92-4632-8391-cc89792884a8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.300492 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a53fffdd-3f92-4632-8391-cc89792884a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.300513 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a53fffdd-3f92-4632-8391-cc89792884a8-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.300523 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.354625 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.355259 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-182dabb6-acc3-402d-adbb-a1c53881a262" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262") on node "crc" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.359895 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a53fffdd-3f92-4632-8391-cc89792884a8" (UID: "a53fffdd-3f92-4632-8391-cc89792884a8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.404337 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a53fffdd-3f92-4632-8391-cc89792884a8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.405586 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.490979 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.610162 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25xpr\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-kube-api-access-25xpr\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611005 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-confd\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-server-conf\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-plugins-conf\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611198 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-config-data\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611223 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-erlang-cookie\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611291 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-plugins\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611330 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-tls\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611365 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/69e5abda-5efa-402f-b66c-320cf6ed1d99-erlang-cookie-secret\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.611464 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/69e5abda-5efa-402f-b66c-320cf6ed1d99-pod-info\") pod \"69e5abda-5efa-402f-b66c-320cf6ed1d99\" (UID: \"69e5abda-5efa-402f-b66c-320cf6ed1d99\") " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.615525 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.615631 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.616402 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/69e5abda-5efa-402f-b66c-320cf6ed1d99-pod-info" (OuterVolumeSpecName: "pod-info") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.616831 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.621729 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-kube-api-access-25xpr" (OuterVolumeSpecName: "kube-api-access-25xpr") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "kube-api-access-25xpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.629106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e5abda-5efa-402f-b66c-320cf6ed1d99-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.631667 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.650188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f" (OuterVolumeSpecName: "persistence") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "pvc-a73777e5-acc5-4ded-8176-ec13d160539f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.655438 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-config-data" (OuterVolumeSpecName: "config-data") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.671839 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.671823 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a53fffdd-3f92-4632-8391-cc89792884a8","Type":"ContainerDied","Data":"4cf895fe2a11b21581bdc4078cb41798a6cb27b9b697859b399ed1db1d84cd97"} Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.672172 4858 scope.go:117] "RemoveContainer" containerID="63a35b3de0e3525ea596ea6f96026f20572c707cc07407fb8bf5ffa177e1d463" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.678938 4858 generic.go:334] "Generic (PLEG): container finished" podID="69e5abda-5efa-402f-b66c-320cf6ed1d99" containerID="dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446" exitCode=0 Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.678979 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"69e5abda-5efa-402f-b66c-320cf6ed1d99","Type":"ContainerDied","Data":"dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446"} Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.679006 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"69e5abda-5efa-402f-b66c-320cf6ed1d99","Type":"ContainerDied","Data":"430d024cc146802f5cbaa680bde2e27004eb97d4308e9a735089b6e85ceaa406"} Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.679021 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.707729 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-server-conf" (OuterVolumeSpecName: "server-conf") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714038 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/69e5abda-5efa-402f-b66c-320cf6ed1d99-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714066 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25xpr\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-kube-api-access-25xpr\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714093 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") on node \"crc\" " Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714103 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714112 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714120 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69e5abda-5efa-402f-b66c-320cf6ed1d99-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714127 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714135 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714142 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.714153 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/69e5abda-5efa-402f-b66c-320cf6ed1d99-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.746450 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.747147 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a73777e5-acc5-4ded-8176-ec13d160539f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f") on node "crc" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.785346 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "69e5abda-5efa-402f-b66c-320cf6ed1d99" (UID: "69e5abda-5efa-402f-b66c-320cf6ed1d99"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.815622 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.815656 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/69e5abda-5efa-402f-b66c-320cf6ed1d99-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.843335 4858 scope.go:117] "RemoveContainer" containerID="d2851682fe6c25612d81583590c07588ee0a134c25c499865ea34610e1d5d805" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.850366 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.872376 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.875865 4858 scope.go:117] "RemoveContainer" containerID="dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.896622 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:57:09 crc kubenswrapper[4858]: E0218 00:57:09.897012 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e5abda-5efa-402f-b66c-320cf6ed1d99" containerName="setup-container" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.897023 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e5abda-5efa-402f-b66c-320cf6ed1d99" containerName="setup-container" Feb 18 00:57:09 crc kubenswrapper[4858]: E0218 00:57:09.897032 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e5abda-5efa-402f-b66c-320cf6ed1d99" containerName="rabbitmq" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.897038 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e5abda-5efa-402f-b66c-320cf6ed1d99" containerName="rabbitmq" Feb 18 00:57:09 crc kubenswrapper[4858]: E0218 00:57:09.897054 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53fffdd-3f92-4632-8391-cc89792884a8" containerName="setup-container" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.897061 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53fffdd-3f92-4632-8391-cc89792884a8" containerName="setup-container" Feb 18 00:57:09 crc kubenswrapper[4858]: E0218 00:57:09.897077 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53fffdd-3f92-4632-8391-cc89792884a8" containerName="rabbitmq" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 
00:57:09.897083 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53fffdd-3f92-4632-8391-cc89792884a8" containerName="rabbitmq" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.897221 4858 scope.go:117] "RemoveContainer" containerID="40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.897257 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e5abda-5efa-402f-b66c-320cf6ed1d99" containerName="rabbitmq" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.897267 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53fffdd-3f92-4632-8391-cc89792884a8" containerName="rabbitmq" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.898356 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.902092 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.902276 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.902436 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.902707 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.902878 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.903080 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lhgmt" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.903210 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.942819 4858 scope.go:117] "RemoveContainer" containerID="dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446" Feb 18 00:57:09 crc kubenswrapper[4858]: E0218 00:57:09.951770 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446\": container with ID starting with dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446 not found: ID does not exist" containerID="dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.951985 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446"} err="failed to get container status \"dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446\": rpc error: code = NotFound desc = could not find container \"dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446\": container with ID starting with dd97a3fd5687b349265977f642b6c2ffaa5da99c6169d50e25e4b36a805a6446 not found: ID does not exist" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.952231 4858 scope.go:117] "RemoveContainer" containerID="40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84" Feb 18 00:57:09 crc kubenswrapper[4858]: E0218 00:57:09.953696 4858 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84\": container with ID starting with 40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84 not found: ID does not exist" containerID="40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.953752 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84"} err="failed to get container status \"40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84\": rpc error: code = NotFound desc = could not find container \"40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84\": container with ID starting with 40d4aaa45e6ba882acc9a8bd3e383cf4d4c226697170031564364f0f2f0a2c84 not found: ID does not exist" Feb 18 00:57:09 crc kubenswrapper[4858]: I0218 00:57:09.960718 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018566 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/930f0d86-3387-4a31-9e89-09f5b92c4ae4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018645 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/930f0d86-3387-4a31-9e89-09f5b92c4ae4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47j5\" (UniqueName: \"kubernetes.io/projected/930f0d86-3387-4a31-9e89-09f5b92c4ae4-kube-api-access-f47j5\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018778 
4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/930f0d86-3387-4a31-9e89-09f5b92c4ae4-config-data\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/930f0d86-3387-4a31-9e89-09f5b92c4ae4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018826 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018901 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.018959 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/930f0d86-3387-4a31-9e89-09f5b92c4ae4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.049766 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.083935 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.107361 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.109279 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.118802 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.118995 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.119115 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5pwvb" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120201 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/930f0d86-3387-4a31-9e89-09f5b92c4ae4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/930f0d86-3387-4a31-9e89-09f5b92c4ae4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/930f0d86-3387-4a31-9e89-09f5b92c4ae4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120365 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120390 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47j5\" (UniqueName: \"kubernetes.io/projected/930f0d86-3387-4a31-9e89-09f5b92c4ae4-kube-api-access-f47j5\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120425 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/930f0d86-3387-4a31-9e89-09f5b92c4ae4-config-data\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " 
pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120445 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/930f0d86-3387-4a31-9e89-09f5b92c4ae4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.120529 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.123562 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.124169 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/930f0d86-3387-4a31-9e89-09f5b92c4ae4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.125934 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.125968 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.126159 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.126245 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.126696 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.127554 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/930f0d86-3387-4a31-9e89-09f5b92c4ae4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.136677 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.139515 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/930f0d86-3387-4a31-9e89-09f5b92c4ae4-config-data\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.164972 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/930f0d86-3387-4a31-9e89-09f5b92c4ae4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.166328 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.174938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/930f0d86-3387-4a31-9e89-09f5b92c4ae4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.175928 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47j5\" (UniqueName: \"kubernetes.io/projected/930f0d86-3387-4a31-9e89-09f5b92c4ae4-kube-api-access-f47j5\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.176075 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/930f0d86-3387-4a31-9e89-09f5b92c4ae4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.199025 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.199065 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7b9bb42ea0921459bd8f9dee1d37c625c88818f9ff056e9cdb682621212c886/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222755 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222779 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk467\" (UniqueName: \"kubernetes.io/projected/85c1c26c-0457-4e59-b0a5-f62699e06d2c-kube-api-access-xk467\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222834 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85c1c26c-0457-4e59-b0a5-f62699e06d2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222898 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222925 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85c1c26c-0457-4e59-b0a5-f62699e06d2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222948 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85c1c26c-0457-4e59-b0a5-f62699e06d2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222973 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85c1c26c-0457-4e59-b0a5-f62699e06d2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.222998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.223051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85c1c26c-0457-4e59-b0a5-f62699e06d2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85c1c26c-0457-4e59-b0a5-f62699e06d2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324808 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85c1c26c-0457-4e59-b0a5-f62699e06d2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85c1c26c-0457-4e59-b0a5-f62699e06d2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324858 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85c1c26c-0457-4e59-b0a5-f62699e06d2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324884 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324936 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85c1c26c-0457-4e59-b0a5-f62699e06d2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.324987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.325005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk467\" (UniqueName: \"kubernetes.io/projected/85c1c26c-0457-4e59-b0a5-f62699e06d2c-kube-api-access-xk467\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.326021 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85c1c26c-0457-4e59-b0a5-f62699e06d2c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.326074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85c1c26c-0457-4e59-b0a5-f62699e06d2c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.327026 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85c1c26c-0457-4e59-b0a5-f62699e06d2c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.327564 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.327817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.329455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.329586 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85c1c26c-0457-4e59-b0a5-f62699e06d2c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.330972 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.331008 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8b6638bc3b4ec62d9a769affd0180f73c2510662f769e962e86871af5bab5490/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.331249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85c1c26c-0457-4e59-b0a5-f62699e06d2c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.334935 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85c1c26c-0457-4e59-b0a5-f62699e06d2c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.346301 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk467\" (UniqueName: \"kubernetes.io/projected/85c1c26c-0457-4e59-b0a5-f62699e06d2c-kube-api-access-xk467\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.361323 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-182dabb6-acc3-402d-adbb-a1c53881a262\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-182dabb6-acc3-402d-adbb-a1c53881a262\") pod \"rabbitmq-server-0\" (UID: \"930f0d86-3387-4a31-9e89-09f5b92c4ae4\") " pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc 
kubenswrapper[4858]: I0218 00:57:10.392112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a73777e5-acc5-4ded-8176-ec13d160539f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a73777e5-acc5-4ded-8176-ec13d160539f\") pod \"rabbitmq-cell1-server-0\" (UID: \"85c1c26c-0457-4e59-b0a5-f62699e06d2c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.456327 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.529126 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.681378 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-wdv2k"] Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.683321 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.686380 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.712277 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-wdv2k"] Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.733408 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.733461 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.733482 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.733793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.733850 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqp7\" (UniqueName: \"kubernetes.io/projected/7094c2c7-6438-4610-ba07-51eece98b1b1-kube-api-access-msqp7\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.733894 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.734100 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-config\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.835485 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.835548 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.835567 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.835653 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.835689 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msqp7\" (UniqueName: \"kubernetes.io/projected/7094c2c7-6438-4610-ba07-51eece98b1b1-kube-api-access-msqp7\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.835714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.835774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-config\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.836442 4858 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.836550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.836941 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-config\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.837005 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.837042 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.837138 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:10 crc kubenswrapper[4858]: I0218 00:57:10.854588 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msqp7\" (UniqueName: \"kubernetes.io/projected/7094c2c7-6438-4610-ba07-51eece98b1b1-kube-api-access-msqp7\") pod \"dnsmasq-dns-dbb88bf8c-wdv2k\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:11 crc kubenswrapper[4858]: I0218 00:57:11.003065 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:57:11 crc kubenswrapper[4858]: I0218 00:57:11.021201 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:11 crc kubenswrapper[4858]: I0218 00:57:11.144442 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:57:11 crc kubenswrapper[4858]: I0218 00:57:11.434944 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e5abda-5efa-402f-b66c-320cf6ed1d99" path="/var/lib/kubelet/pods/69e5abda-5efa-402f-b66c-320cf6ed1d99/volumes" Feb 18 00:57:11 crc kubenswrapper[4858]: I0218 00:57:11.437172 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53fffdd-3f92-4632-8391-cc89792884a8" path="/var/lib/kubelet/pods/a53fffdd-3f92-4632-8391-cc89792884a8/volumes" Feb 18 00:57:11 crc kubenswrapper[4858]: I0218 00:57:11.498060 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-wdv2k"] Feb 18 00:57:11 crc kubenswrapper[4858]: E0218 00:57:11.538834 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:57:11 crc kubenswrapper[4858]: E0218 00:57:11.538914 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:57:11 crc kubenswrapper[4858]: E0218 00:57:11.539073 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:57:11 crc kubenswrapper[4858]: E0218 00:57:11.540313 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:57:11 crc kubenswrapper[4858]: I0218 00:57:11.719745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"930f0d86-3387-4a31-9e89-09f5b92c4ae4","Type":"ContainerStarted","Data":"6b8f87f9089274511b0f435aef678a679219a6030cec1d630c51b8b0d3d4b1b7"} Feb 18 00:57:11 crc kubenswrapper[4858]: I0218 00:57:11.722129 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" event={"ID":"7094c2c7-6438-4610-ba07-51eece98b1b1","Type":"ContainerStarted","Data":"83136449b75f1b85d8ae16eb00843363fc189b1872ea3791ba723e4c9bdfe57c"} Feb 18 00:57:11 crc kubenswrapper[4858]: I0218 00:57:11.723162 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85c1c26c-0457-4e59-b0a5-f62699e06d2c","Type":"ContainerStarted","Data":"5190c35091007b7eb7e1e2af936c3b7526cd8a4391db99f28ac4b102587f00b8"} Feb 18 00:57:12 crc kubenswrapper[4858]: I0218 00:57:12.732191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"930f0d86-3387-4a31-9e89-09f5b92c4ae4","Type":"ContainerStarted","Data":"656d677654c557d886eff029a66c1ef29740c99d67840a2c74904b170b9d200b"} Feb 18 00:57:12 crc kubenswrapper[4858]: I0218 00:57:12.734113 4858 generic.go:334] "Generic (PLEG): container finished" podID="7094c2c7-6438-4610-ba07-51eece98b1b1" containerID="9bf55dd703330553dc7bb5d8ce0b5039f623209fa63732e7d1190a17ca8da393" exitCode=0 Feb 18 00:57:12 crc kubenswrapper[4858]: I0218 00:57:12.734193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" event={"ID":"7094c2c7-6438-4610-ba07-51eece98b1b1","Type":"ContainerDied","Data":"9bf55dd703330553dc7bb5d8ce0b5039f623209fa63732e7d1190a17ca8da393"} Feb 18 00:57:12 crc kubenswrapper[4858]: I0218 00:57:12.735521 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85c1c26c-0457-4e59-b0a5-f62699e06d2c","Type":"ContainerStarted","Data":"9f0606d3bec23329f776b641861abb0244cc581a318fbf8bbc60b8a4940e0231"} Feb 18 00:57:13 crc kubenswrapper[4858]: I0218 00:57:13.747037 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" event={"ID":"7094c2c7-6438-4610-ba07-51eece98b1b1","Type":"ContainerStarted","Data":"e390749bfb81e06dffd5fd009c8f59a008ab0a8f476b1ef3ba61f28d29ade7b9"} Feb 18 00:57:13 crc kubenswrapper[4858]: I0218 00:57:13.795864 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" podStartSLOduration=3.7958419169999997 podStartE2EDuration="3.795841917s" podCreationTimestamp="2026-02-18 00:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:57:13.777430022 +0000 UTC m=+1387.083266804" watchObservedRunningTime="2026-02-18 00:57:13.795841917 +0000 UTC m=+1387.101678659" Feb 18 00:57:14 crc kubenswrapper[4858]: I0218 00:57:14.770858 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:19 crc kubenswrapper[4858]: I0218 00:57:19.435520 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 00:57:19 crc kubenswrapper[4858]: E0218 00:57:19.544890 4858 log.go:32] "PullImage 
from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:57:19 crc kubenswrapper[4858]: E0218 00:57:19.545005 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:57:19 crc kubenswrapper[4858]: E0218 00:57:19.545258 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:57:19 crc kubenswrapper[4858]: E0218 00:57:19.546733 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:57:19 crc kubenswrapper[4858]: E0218 00:57:19.823667 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.024817 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.121286 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-th7f4"] Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.121572 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" podUID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" containerName="dnsmasq-dns" containerID="cri-o://2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38" gracePeriod=10 Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.289388 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-8xfk8"] Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.292128 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.310533 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" podUID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.229:5353: connect: connection refused" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.320099 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-8xfk8"] Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.400794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.400852 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-config\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.400882 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.400937 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.400951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz4w9\" (UniqueName: \"kubernetes.io/projected/d60d959f-1901-4dcb-b7fc-51a6523275a1-kube-api-access-dz4w9\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.400979 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-dns-svc\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.401012 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.503127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.503173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-config\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.503200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.503251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.503267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz4w9\" (UniqueName: \"kubernetes.io/projected/d60d959f-1901-4dcb-b7fc-51a6523275a1-kube-api-access-dz4w9\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.503296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-dns-svc\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.503325 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.504068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.504074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.504625 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.504696 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-dns-svc\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.504946 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-config\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.505206 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d60d959f-1901-4dcb-b7fc-51a6523275a1-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.546842 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz4w9\" (UniqueName: \"kubernetes.io/projected/d60d959f-1901-4dcb-b7fc-51a6523275a1-kube-api-access-dz4w9\") pod \"dnsmasq-dns-85f64749dc-8xfk8\" (UID: \"d60d959f-1901-4dcb-b7fc-51a6523275a1\") " pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.628624 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.758983 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.854226 4858 generic.go:334] "Generic (PLEG): container finished" podID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" containerID="2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38" exitCode=0 Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.854914 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" event={"ID":"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96","Type":"ContainerDied","Data":"2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38"} Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.855064 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" event={"ID":"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96","Type":"ContainerDied","Data":"ee0477df752ba161dc005c82d1447e5abe3245be545b42a5c6ea9df436fbfaad"} Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.855140 4858 scope.go:117] "RemoveContainer" containerID="2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.855387 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-th7f4" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.904596 4858 scope.go:117] "RemoveContainer" containerID="14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.912051 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kspxc\" (UniqueName: \"kubernetes.io/projected/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-kube-api-access-kspxc\") pod \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.912086 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-config\") pod \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.912131 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-svc\") pod \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.912229 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-sb\") pod \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.912276 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-swift-storage-0\") pod \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.912327 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-nb\") pod \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\" (UID: \"84e1bbfb-0a80-457f-9fd3-eeb803d1fe96\") " Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.920644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-kube-api-access-kspxc" (OuterVolumeSpecName: "kube-api-access-kspxc") pod "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" (UID: "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96"). InnerVolumeSpecName "kube-api-access-kspxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.949005 4858 scope.go:117] "RemoveContainer" containerID="2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38" Feb 18 00:57:21 crc kubenswrapper[4858]: E0218 00:57:21.949403 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38\": container with ID starting with 2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38 not found: ID does not exist" containerID="2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.949427 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38"} err="failed to get container status \"2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38\": rpc error: code = NotFound desc = could not find container \"2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38\": container with ID starting with 2af501d01cf5590a259566819bec41f25f8238f3333029d30c1ec9bd26ecfc38 not found: ID does not exist" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.949447 4858 scope.go:117] "RemoveContainer" containerID="14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631" Feb 18 00:57:21 crc kubenswrapper[4858]: E0218 00:57:21.951049 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631\": container with ID starting with 14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631 not found: ID does not exist" containerID="14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.951079 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631"} err="failed to get container status \"14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631\": rpc error: code = NotFound desc = could not find container \"14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631\": container with ID starting with 14710cf2b11b5a3355041cd81bbc38dfd1e499869a801342b6e603b21dcef631 not found: ID does not exist" Feb 18 00:57:21 crc kubenswrapper[4858]: I0218 00:57:21.993107 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" (UID: "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.006332 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-config" (OuterVolumeSpecName: "config") pod "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" (UID: "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.007173 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" (UID: "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.016139 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.016172 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.016182 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.016191 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kspxc\" (UniqueName: \"kubernetes.io/projected/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-kube-api-access-kspxc\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.021102 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" (UID: "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.067962 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" (UID: "84e1bbfb-0a80-457f-9fd3-eeb803d1fe96"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.118852 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.118893 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.145733 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-8xfk8"] Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.214596 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-th7f4"] Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.249351 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-th7f4"] Feb 18 00:57:22 crc kubenswrapper[4858]: E0218 00:57:22.420731 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.874453 4858 generic.go:334] "Generic (PLEG): container finished" podID="d60d959f-1901-4dcb-b7fc-51a6523275a1" containerID="25b73087a6f9a1a46236c40bfacf2a2e2407b04d591ee5a06cbe519e097c00bf" exitCode=0 Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.874597 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" event={"ID":"d60d959f-1901-4dcb-b7fc-51a6523275a1","Type":"ContainerDied","Data":"25b73087a6f9a1a46236c40bfacf2a2e2407b04d591ee5a06cbe519e097c00bf"} Feb 18 00:57:22 crc kubenswrapper[4858]: I0218 00:57:22.874796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" event={"ID":"d60d959f-1901-4dcb-b7fc-51a6523275a1","Type":"ContainerStarted","Data":"5909fe60fc0d29a96d9dd23ce6624ebde73388113289d855684ba38e8c21c5be"} Feb 18 00:57:23 crc kubenswrapper[4858]: I0218 00:57:23.436708 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" path="/var/lib/kubelet/pods/84e1bbfb-0a80-457f-9fd3-eeb803d1fe96/volumes" Feb 18 00:57:23 crc kubenswrapper[4858]: I0218 00:57:23.885573 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" event={"ID":"d60d959f-1901-4dcb-b7fc-51a6523275a1","Type":"ContainerStarted","Data":"f7609d8d928c2c9da0e57abe7b0ac16be046d0308e7a82bdde36c930e02d0851"} Feb 18 00:57:23 crc kubenswrapper[4858]: I0218 00:57:23.885746 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:23 crc kubenswrapper[4858]: I0218 00:57:23.906817 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" podStartSLOduration=2.906793922 podStartE2EDuration="2.906793922s" podCreationTimestamp="2026-02-18 00:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
00:57:23.902267573 +0000 UTC m=+1397.208104315" watchObservedRunningTime="2026-02-18 00:57:23.906793922 +0000 UTC m=+1397.212630654" Feb 18 00:57:31 crc kubenswrapper[4858]: I0218 00:57:31.630726 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f64749dc-8xfk8" Feb 18 00:57:31 crc kubenswrapper[4858]: I0218 00:57:31.708000 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-wdv2k"] Feb 18 00:57:31 crc kubenswrapper[4858]: I0218 00:57:31.708283 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" podUID="7094c2c7-6438-4610-ba07-51eece98b1b1" containerName="dnsmasq-dns" containerID="cri-o://e390749bfb81e06dffd5fd009c8f59a008ab0a8f476b1ef3ba61f28d29ade7b9" gracePeriod=10 Feb 18 00:57:31 crc kubenswrapper[4858]: I0218 00:57:31.980237 4858 generic.go:334] "Generic (PLEG): container finished" podID="7094c2c7-6438-4610-ba07-51eece98b1b1" containerID="e390749bfb81e06dffd5fd009c8f59a008ab0a8f476b1ef3ba61f28d29ade7b9" exitCode=0 Feb 18 00:57:31 crc kubenswrapper[4858]: I0218 00:57:31.980433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" event={"ID":"7094c2c7-6438-4610-ba07-51eece98b1b1","Type":"ContainerDied","Data":"e390749bfb81e06dffd5fd009c8f59a008ab0a8f476b1ef3ba61f28d29ade7b9"} Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.281208 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.461404 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-svc\") pod \"7094c2c7-6438-4610-ba07-51eece98b1b1\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.461592 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-openstack-edpm-ipam\") pod \"7094c2c7-6438-4610-ba07-51eece98b1b1\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.461659 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-swift-storage-0\") pod \"7094c2c7-6438-4610-ba07-51eece98b1b1\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.461712 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-sb\") pod \"7094c2c7-6438-4610-ba07-51eece98b1b1\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.461743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-config\") pod \"7094c2c7-6438-4610-ba07-51eece98b1b1\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.461782 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-nb\") pod \"7094c2c7-6438-4610-ba07-51eece98b1b1\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.461805 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msqp7\" (UniqueName: \"kubernetes.io/projected/7094c2c7-6438-4610-ba07-51eece98b1b1-kube-api-access-msqp7\") pod \"7094c2c7-6438-4610-ba07-51eece98b1b1\" (UID: \"7094c2c7-6438-4610-ba07-51eece98b1b1\") " Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.500101 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7094c2c7-6438-4610-ba07-51eece98b1b1-kube-api-access-msqp7" (OuterVolumeSpecName: "kube-api-access-msqp7") pod "7094c2c7-6438-4610-ba07-51eece98b1b1" (UID: "7094c2c7-6438-4610-ba07-51eece98b1b1"). InnerVolumeSpecName "kube-api-access-msqp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.520168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7094c2c7-6438-4610-ba07-51eece98b1b1" (UID: "7094c2c7-6438-4610-ba07-51eece98b1b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.526074 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7094c2c7-6438-4610-ba07-51eece98b1b1" (UID: "7094c2c7-6438-4610-ba07-51eece98b1b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.535544 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "7094c2c7-6438-4610-ba07-51eece98b1b1" (UID: "7094c2c7-6438-4610-ba07-51eece98b1b1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.539128 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7094c2c7-6438-4610-ba07-51eece98b1b1" (UID: "7094c2c7-6438-4610-ba07-51eece98b1b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.554322 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7094c2c7-6438-4610-ba07-51eece98b1b1" (UID: "7094c2c7-6438-4610-ba07-51eece98b1b1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.554390 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-config" (OuterVolumeSpecName: "config") pod "7094c2c7-6438-4610-ba07-51eece98b1b1" (UID: "7094c2c7-6438-4610-ba07-51eece98b1b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.567856 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.569422 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msqp7\" (UniqueName: \"kubernetes.io/projected/7094c2c7-6438-4610-ba07-51eece98b1b1-kube-api-access-msqp7\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.569543 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.569608 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.569658 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.569710 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.569762 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7094c2c7-6438-4610-ba07-51eece98b1b1-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.996262 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" event={"ID":"7094c2c7-6438-4610-ba07-51eece98b1b1","Type":"ContainerDied","Data":"83136449b75f1b85d8ae16eb00843363fc189b1872ea3791ba723e4c9bdfe57c"} Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.996663 4858 scope.go:117] "RemoveContainer" containerID="e390749bfb81e06dffd5fd009c8f59a008ab0a8f476b1ef3ba61f28d29ade7b9" Feb 18 00:57:32 crc kubenswrapper[4858]: I0218 00:57:32.996311 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-wdv2k" Feb 18 00:57:33 crc kubenswrapper[4858]: I0218 00:57:33.034485 4858 scope.go:117] "RemoveContainer" containerID="9bf55dd703330553dc7bb5d8ce0b5039f623209fa63732e7d1190a17ca8da393" Feb 18 00:57:33 crc kubenswrapper[4858]: I0218 00:57:33.073412 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-wdv2k"] Feb 18 00:57:33 crc kubenswrapper[4858]: I0218 00:57:33.091770 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-wdv2k"] Feb 18 00:57:33 crc kubenswrapper[4858]: I0218 00:57:33.435486 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7094c2c7-6438-4610-ba07-51eece98b1b1" path="/var/lib/kubelet/pods/7094c2c7-6438-4610-ba07-51eece98b1b1/volumes" Feb 18 00:57:35 crc kubenswrapper[4858]: E0218 00:57:35.424037 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:57:35 crc kubenswrapper[4858]: E0218 00:57:35.555008 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:57:35 crc kubenswrapper[4858]: E0218 00:57:35.555258 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:57:35 crc kubenswrapper[4858]: E0218 00:57:35.555363 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:57:35 crc kubenswrapper[4858]: E0218 00:57:35.556741 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.348194 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq"] Feb 18 00:57:40 crc kubenswrapper[4858]: E0218 00:57:40.348948 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7094c2c7-6438-4610-ba07-51eece98b1b1" containerName="dnsmasq-dns" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.348965 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7094c2c7-6438-4610-ba07-51eece98b1b1" containerName="dnsmasq-dns" Feb 18 00:57:40 crc kubenswrapper[4858]: E0218 00:57:40.349001 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7094c2c7-6438-4610-ba07-51eece98b1b1" containerName="init" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.349009 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7094c2c7-6438-4610-ba07-51eece98b1b1" containerName="init" Feb 18 00:57:40 crc kubenswrapper[4858]: E0218 00:57:40.349030 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" containerName="dnsmasq-dns" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.349038 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" containerName="dnsmasq-dns" Feb 18 00:57:40 crc kubenswrapper[4858]: E0218 00:57:40.349054 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" containerName="init" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.349062 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" containerName="init" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.349316 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7094c2c7-6438-4610-ba07-51eece98b1b1" containerName="dnsmasq-dns" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.349345 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e1bbfb-0a80-457f-9fd3-eeb803d1fe96" containerName="dnsmasq-dns" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.353544 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.356684 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.359101 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.359638 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.360253 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq"] Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.360676 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.447205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.447569 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.447604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.448119 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m47l\" (UniqueName: \"kubernetes.io/projected/0d1d2c63-5add-4004-90e1-54f46ac421e4-kube-api-access-5m47l\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.550612 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.550753 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.550814 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.552811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m47l\" (UniqueName: \"kubernetes.io/projected/0d1d2c63-5add-4004-90e1-54f46ac421e4-kube-api-access-5m47l\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.559905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.567387 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.568065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.584215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m47l\" (UniqueName: \"kubernetes.io/projected/0d1d2c63-5add-4004-90e1-54f46ac421e4-kube-api-access-5m47l\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:40 crc kubenswrapper[4858]: I0218 00:57:40.693118 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:57:41 crc kubenswrapper[4858]: I0218 00:57:41.242274 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq"] Feb 18 00:57:42 crc kubenswrapper[4858]: I0218 00:57:42.122583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" event={"ID":"0d1d2c63-5add-4004-90e1-54f46ac421e4","Type":"ContainerStarted","Data":"44c2e3a965c435c26fd34bdec8d999f33be5ade6b7149bdf18047fab99974f36"} Feb 18 00:57:45 crc kubenswrapper[4858]: I0218 00:57:45.154843 4858 generic.go:334] "Generic (PLEG): container finished" podID="930f0d86-3387-4a31-9e89-09f5b92c4ae4" containerID="656d677654c557d886eff029a66c1ef29740c99d67840a2c74904b170b9d200b" exitCode=0 Feb 18 00:57:45 crc kubenswrapper[4858]: I0218 00:57:45.154960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"930f0d86-3387-4a31-9e89-09f5b92c4ae4","Type":"ContainerDied","Data":"656d677654c557d886eff029a66c1ef29740c99d67840a2c74904b170b9d200b"} Feb 18 00:57:46 crc kubenswrapper[4858]: I0218 00:57:46.170491 4858 generic.go:334] "Generic (PLEG): container finished" podID="85c1c26c-0457-4e59-b0a5-f62699e06d2c" containerID="9f0606d3bec23329f776b641861abb0244cc581a318fbf8bbc60b8a4940e0231" exitCode=0 Feb 18 00:57:46 crc kubenswrapper[4858]: I0218 00:57:46.170568 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85c1c26c-0457-4e59-b0a5-f62699e06d2c","Type":"ContainerDied","Data":"9f0606d3bec23329f776b641861abb0244cc581a318fbf8bbc60b8a4940e0231"} Feb 18 00:57:47 crc kubenswrapper[4858]: E0218 00:57:47.434137 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:57:47 crc kubenswrapper[4858]: E0218 00:57:47.572774 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:57:47 crc kubenswrapper[4858]: E0218 00:57:47.573103 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:57:47 crc kubenswrapper[4858]: E0218 00:57:47.573212 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:57:47 crc kubenswrapper[4858]: E0218 00:57:47.574459 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:57:52 crc kubenswrapper[4858]: I0218 00:57:52.236305 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"85c1c26c-0457-4e59-b0a5-f62699e06d2c","Type":"ContainerStarted","Data":"4f16d49a6a4fa0b45d8b0522973b3e55b05c653e9917fb42942efaa795dcb2fe"} Feb 18 00:57:52 crc kubenswrapper[4858]: I0218 00:57:52.236978 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:57:52 crc kubenswrapper[4858]: I0218 00:57:52.239232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"930f0d86-3387-4a31-9e89-09f5b92c4ae4","Type":"ContainerStarted","Data":"a5f7bc1df9dae32f7527ce6b9648f44815ec63916adb16415f439e39768e8bce"} Feb 18 00:57:52 crc kubenswrapper[4858]: I0218 00:57:52.239429 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 00:57:52 crc kubenswrapper[4858]: I0218 00:57:52.242407 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" event={"ID":"0d1d2c63-5add-4004-90e1-54f46ac421e4","Type":"ContainerStarted","Data":"d81a8a9dd92fcd9d24f4b926161fc285499a22ddf83be6cd7917f32e3448a50e"} Feb 18 00:57:52 crc kubenswrapper[4858]: I0218 00:57:52.279450 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.279428764 podStartE2EDuration="42.279428764s" podCreationTimestamp="2026-02-18 00:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:57:52.270872528 +0000 UTC m=+1425.576709260" watchObservedRunningTime="2026-02-18 00:57:52.279428764 +0000 UTC m=+1425.585265496" Feb 18 00:57:52 crc kubenswrapper[4858]: I0218 00:57:52.314721 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" podStartSLOduration=2.639352993 podStartE2EDuration="12.314701075s" podCreationTimestamp="2026-02-18 00:57:40 +0000 UTC" firstStartedPulling="2026-02-18 00:57:41.253614727 +0000 UTC m=+1414.559451459" lastFinishedPulling="2026-02-18 00:57:50.928962799 +0000 UTC m=+1424.234799541" observedRunningTime="2026-02-18 00:57:52.310863833 +0000 UTC m=+1425.616700575" watchObservedRunningTime="2026-02-18 00:57:52.314701075 +0000 UTC m=+1425.620537797" Feb 18 00:57:52 crc kubenswrapper[4858]: I0218 00:57:52.377909 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.37788659 podStartE2EDuration="43.37788659s" podCreationTimestamp="2026-02-18 00:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:57:52.362207222 +0000 UTC m=+1425.668043954" watchObservedRunningTime="2026-02-18 00:57:52.37788659 +0000 UTC m=+1425.683723332" Feb 18 00:57:59 crc kubenswrapper[4858]: E0218 00:57:59.422729 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" 
podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:58:02 crc kubenswrapper[4858]: I0218 00:58:02.362754 4858 generic.go:334] "Generic (PLEG): container finished" podID="0d1d2c63-5add-4004-90e1-54f46ac421e4" containerID="d81a8a9dd92fcd9d24f4b926161fc285499a22ddf83be6cd7917f32e3448a50e" exitCode=0 Feb 18 00:58:02 crc kubenswrapper[4858]: I0218 00:58:02.363469 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" event={"ID":"0d1d2c63-5add-4004-90e1-54f46ac421e4","Type":"ContainerDied","Data":"d81a8a9dd92fcd9d24f4b926161fc285499a22ddf83be6cd7917f32e3448a50e"} Feb 18 00:58:02 crc kubenswrapper[4858]: E0218 00:58:02.423463 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.029053 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.172424 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-repo-setup-combined-ca-bundle\") pod \"0d1d2c63-5add-4004-90e1-54f46ac421e4\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.172471 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-ssh-key-openstack-edpm-ipam\") pod \"0d1d2c63-5add-4004-90e1-54f46ac421e4\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.172705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-inventory\") pod \"0d1d2c63-5add-4004-90e1-54f46ac421e4\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.172744 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m47l\" (UniqueName: \"kubernetes.io/projected/0d1d2c63-5add-4004-90e1-54f46ac421e4-kube-api-access-5m47l\") pod \"0d1d2c63-5add-4004-90e1-54f46ac421e4\" (UID: \"0d1d2c63-5add-4004-90e1-54f46ac421e4\") " Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.193622 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0d1d2c63-5add-4004-90e1-54f46ac421e4" (UID: "0d1d2c63-5add-4004-90e1-54f46ac421e4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.193638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d1d2c63-5add-4004-90e1-54f46ac421e4-kube-api-access-5m47l" (OuterVolumeSpecName: "kube-api-access-5m47l") pod "0d1d2c63-5add-4004-90e1-54f46ac421e4" (UID: "0d1d2c63-5add-4004-90e1-54f46ac421e4"). InnerVolumeSpecName "kube-api-access-5m47l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.217560 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d1d2c63-5add-4004-90e1-54f46ac421e4" (UID: "0d1d2c63-5add-4004-90e1-54f46ac421e4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.220912 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-inventory" (OuterVolumeSpecName: "inventory") pod "0d1d2c63-5add-4004-90e1-54f46ac421e4" (UID: "0d1d2c63-5add-4004-90e1-54f46ac421e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.275272 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.275433 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m47l\" (UniqueName: \"kubernetes.io/projected/0d1d2c63-5add-4004-90e1-54f46ac421e4-kube-api-access-5m47l\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.275494 4858 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.275583 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d1d2c63-5add-4004-90e1-54f46ac421e4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.388644 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" event={"ID":"0d1d2c63-5add-4004-90e1-54f46ac421e4","Type":"ContainerDied","Data":"44c2e3a965c435c26fd34bdec8d999f33be5ade6b7149bdf18047fab99974f36"} Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.388678 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c2e3a965c435c26fd34bdec8d999f33be5ade6b7149bdf18047fab99974f36" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.388732 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.494395 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch"] Feb 18 00:58:04 crc kubenswrapper[4858]: E0218 00:58:04.494938 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d1d2c63-5add-4004-90e1-54f46ac421e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.494960 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1d2c63-5add-4004-90e1-54f46ac421e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.495211 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d1d2c63-5add-4004-90e1-54f46ac421e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.496143 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.498506 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.498513 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.498953 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.501840 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.508716 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch"] Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.682829 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-762ch\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.682887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkpwv\" (UniqueName: \"kubernetes.io/projected/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-kube-api-access-dkpwv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-762ch\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.682936 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-762ch\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.784812 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-762ch\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.785080 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkpwv\" (UniqueName: \"kubernetes.io/projected/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-kube-api-access-dkpwv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-762ch\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.785188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-762ch\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.789000 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-762ch\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.797458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-762ch\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:04 crc kubenswrapper[4858]: I0218 00:58:04.819994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkpwv\" (UniqueName: \"kubernetes.io/projected/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-kube-api-access-dkpwv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-762ch\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:05 crc kubenswrapper[4858]: I0218 00:58:05.113912 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:05 crc kubenswrapper[4858]: I0218 00:58:05.684737 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch"] Feb 18 00:58:06 crc kubenswrapper[4858]: I0218 00:58:06.412827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" event={"ID":"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf","Type":"ContainerStarted","Data":"6a57f0ffc21e31bfe4631d20d87a3cc9092ad3c9c7c5efb870defec726d64ef1"} Feb 18 00:58:06 crc kubenswrapper[4858]: I0218 00:58:06.413157 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" event={"ID":"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf","Type":"ContainerStarted","Data":"852080f68578be4d95bf4d589f18857e66cc889c629d8f5e0b19da5402c8544f"} Feb 18 00:58:06 crc kubenswrapper[4858]: I0218 00:58:06.439869 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" podStartSLOduration=2.036925145 podStartE2EDuration="2.43985045s" podCreationTimestamp="2026-02-18 00:58:04 +0000 UTC" firstStartedPulling="2026-02-18 00:58:05.691909807 +0000 UTC m=+1438.997746539" lastFinishedPulling="2026-02-18 00:58:06.094835112 +0000 UTC m=+1439.400671844" observedRunningTime="2026-02-18 00:58:06.433733942 +0000 UTC m=+1439.739570684" watchObservedRunningTime="2026-02-18 00:58:06.43985045 +0000 UTC m=+1439.745687192" Feb 18 00:58:09 crc kubenswrapper[4858]: I0218 00:58:09.465495 4858 generic.go:334] "Generic (PLEG): container finished" podID="15f5690b-3488-41ab-ba71-6aaf7f6b6bbf" containerID="6a57f0ffc21e31bfe4631d20d87a3cc9092ad3c9c7c5efb870defec726d64ef1" exitCode=0 Feb 18 00:58:09 crc kubenswrapper[4858]: I0218 00:58:09.465576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" event={"ID":"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf","Type":"ContainerDied","Data":"6a57f0ffc21e31bfe4631d20d87a3cc9092ad3c9c7c5efb870defec726d64ef1"} Feb 18 00:58:10 crc kubenswrapper[4858]: E0218 00:58:10.422362 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:58:10 crc kubenswrapper[4858]: I0218 00:58:10.460648 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:58:10 crc kubenswrapper[4858]: I0218 00:58:10.537770 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.072666 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.123135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkpwv\" (UniqueName: \"kubernetes.io/projected/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-kube-api-access-dkpwv\") pod \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.123206 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-ssh-key-openstack-edpm-ipam\") pod \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.123252 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-inventory\") pod \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\" (UID: \"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf\") " Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.131881 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-kube-api-access-dkpwv" (OuterVolumeSpecName: "kube-api-access-dkpwv") pod "15f5690b-3488-41ab-ba71-6aaf7f6b6bbf" (UID: "15f5690b-3488-41ab-ba71-6aaf7f6b6bbf"). InnerVolumeSpecName "kube-api-access-dkpwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.158720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15f5690b-3488-41ab-ba71-6aaf7f6b6bbf" (UID: "15f5690b-3488-41ab-ba71-6aaf7f6b6bbf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.167064 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-inventory" (OuterVolumeSpecName: "inventory") pod "15f5690b-3488-41ab-ba71-6aaf7f6b6bbf" (UID: "15f5690b-3488-41ab-ba71-6aaf7f6b6bbf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.227199 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.227235 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.227250 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkpwv\" (UniqueName: \"kubernetes.io/projected/15f5690b-3488-41ab-ba71-6aaf7f6b6bbf-kube-api-access-dkpwv\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.485540 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" event={"ID":"15f5690b-3488-41ab-ba71-6aaf7f6b6bbf","Type":"ContainerDied","Data":"852080f68578be4d95bf4d589f18857e66cc889c629d8f5e0b19da5402c8544f"} Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.485595 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="852080f68578be4d95bf4d589f18857e66cc889c629d8f5e0b19da5402c8544f" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.485600 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-762ch" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.564397 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf"] Feb 18 00:58:11 crc kubenswrapper[4858]: E0218 00:58:11.564966 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f5690b-3488-41ab-ba71-6aaf7f6b6bbf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.564983 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f5690b-3488-41ab-ba71-6aaf7f6b6bbf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.565337 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f5690b-3488-41ab-ba71-6aaf7f6b6bbf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.566391 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.571990 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.572199 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.574456 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.575247 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.575946 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf"] Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.739636 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.739945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.740113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.740300 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhr4f\" (UniqueName: \"kubernetes.io/projected/2b6904c5-bb8c-4534-a12c-723f228bcf32-kube-api-access-xhr4f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.843449 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.844075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.844148 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.844248 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhr4f\" (UniqueName: \"kubernetes.io/projected/2b6904c5-bb8c-4534-a12c-723f228bcf32-kube-api-access-xhr4f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.852856 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.855672 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.856279 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.864637 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhr4f\" (UniqueName: \"kubernetes.io/projected/2b6904c5-bb8c-4534-a12c-723f228bcf32-kube-api-access-xhr4f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:11 crc kubenswrapper[4858]: I0218 00:58:11.904681 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 00:58:12 crc kubenswrapper[4858]: I0218 00:58:12.498445 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf"] Feb 18 00:58:13 crc kubenswrapper[4858]: E0218 00:58:13.423265 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:58:13 crc kubenswrapper[4858]: I0218 00:58:13.507071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" event={"ID":"2b6904c5-bb8c-4534-a12c-723f228bcf32","Type":"ContainerStarted","Data":"1ce5e38faba737ce5f50e273a22cd22a87a9f3ecdb5650ca245c126197fbe82f"} Feb 18 00:58:13 crc kubenswrapper[4858]: I0218 00:58:13.507138 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" event={"ID":"2b6904c5-bb8c-4534-a12c-723f228bcf32","Type":"ContainerStarted","Data":"04ffb4a3cfe560364e1490ce68b03d5e9c9b028a06499627e0e5eb0546f3a815"} Feb 18 00:58:13 crc kubenswrapper[4858]: I0218 00:58:13.527917 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" podStartSLOduration=2.128834869 podStartE2EDuration="2.527896171s" podCreationTimestamp="2026-02-18 00:58:11 +0000 UTC" firstStartedPulling="2026-02-18 00:58:12.51034158 +0000 UTC m=+1445.816178302" lastFinishedPulling="2026-02-18 00:58:12.909402862 +0000 UTC m=+1446.215239604" observedRunningTime="2026-02-18 00:58:13.525241177 +0000 UTC m=+1446.831077929" watchObservedRunningTime="2026-02-18 00:58:13.527896171 +0000 UTC m=+1446.833732903" Feb 18 00:58:24 crc kubenswrapper[4858]: E0218 00:58:24.511165 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:58:24 crc kubenswrapper[4858]: E0218 00:58:24.511748 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:58:24 crc kubenswrapper[4858]: E0218 00:58:24.511924 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:58:24 crc kubenswrapper[4858]: E0218 00:58:24.513867 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:58:28 crc kubenswrapper[4858]: I0218 00:58:28.436523 4858 scope.go:117] "RemoveContainer" containerID="460b60ca9ee5fc36532dc071dd525d56c92370185a619a80f8fa46d461065709" Feb 18 00:58:28 crc kubenswrapper[4858]: I0218 00:58:28.491584 4858 scope.go:117] "RemoveContainer" containerID="9ec0343b228ed3380bff6d29318d38d5afee51a72eb3aa222f2d7954b2bcdde0" Feb 18 00:58:28 crc kubenswrapper[4858]: E0218 00:58:28.555530 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:58:28 crc kubenswrapper[4858]: E0218 00:58:28.555588 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:58:28 crc kubenswrapper[4858]: E0218 00:58:28.555718 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:58:28 crc kubenswrapper[4858]: E0218 00:58:28.557309 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:58:37 crc kubenswrapper[4858]: E0218 00:58:37.434427 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:58:42 crc kubenswrapper[4858]: E0218 00:58:42.422325 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:58:52 crc kubenswrapper[4858]: E0218 00:58:52.421304 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:58:55 crc kubenswrapper[4858]: I0218 00:58:55.265610 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:58:55 crc kubenswrapper[4858]: I0218 00:58:55.266018 4858 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:58:55 crc kubenswrapper[4858]: E0218 00:58:55.421337 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:59:03 crc kubenswrapper[4858]: E0218 00:59:03.424051 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:59:08 crc kubenswrapper[4858]: E0218 00:59:08.422701 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:59:17 crc kubenswrapper[4858]: E0218 00:59:17.428405 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:59:22 crc kubenswrapper[4858]: E0218 00:59:22.423939 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:59:25 crc kubenswrapper[4858]: I0218 00:59:25.265040 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:59:25 crc kubenswrapper[4858]: I0218 00:59:25.265409 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:59:28 crc kubenswrapper[4858]: E0218 00:59:28.422531 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:59:28 crc kubenswrapper[4858]: I0218 00:59:28.628017 4858 
scope.go:117] "RemoveContainer" containerID="c851f21e318745b4ac65d3a4f6e6f96ca9397e32fe50f27eb6b24cd796128fb6" Feb 18 00:59:37 crc kubenswrapper[4858]: E0218 00:59:37.437685 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:59:40 crc kubenswrapper[4858]: E0218 00:59:40.424312 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:59:48 crc kubenswrapper[4858]: E0218 00:59:48.422536 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.265205 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.265904 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.265964 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.266766 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.266851 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" gracePeriod=600 Feb 18 00:59:55 crc kubenswrapper[4858]: E0218 00:59:55.391928 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.423921 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:59:55 crc kubenswrapper[4858]: E0218 00:59:55.550865 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:59:55 crc kubenswrapper[4858]: E0218 00:59:55.550951 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 00:59:55 crc kubenswrapper[4858]: E0218 00:59:55.551160 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,Run
AsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:59:55 crc kubenswrapper[4858]: E0218 00:59:55.552379 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.769896 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" exitCode=0 Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.769978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab"} Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.770080 4858 scope.go:117] "RemoveContainer" containerID="3b23372d3ce28e5b83a0ba7f985899a4c07e0af615e7bbb60af93be8938ae620" Feb 18 00:59:55 crc kubenswrapper[4858]: I0218 00:59:55.771596 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 00:59:55 crc kubenswrapper[4858]: E0218 00:59:55.772216 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.169991 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl"] Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.171933 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.175022 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.181148 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.189953 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl"] Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.290930 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-config-volume\") pod \"collect-profiles-29522940-ch6tl\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.291169 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-secret-volume\") pod \"collect-profiles-29522940-ch6tl\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.291285 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jfwj\" (UniqueName: \"kubernetes.io/projected/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-kube-api-access-2jfwj\") pod \"collect-profiles-29522940-ch6tl\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.393186 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-secret-volume\") pod \"collect-profiles-29522940-ch6tl\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.393365 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jfwj\" (UniqueName: \"kubernetes.io/projected/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-kube-api-access-2jfwj\") pod \"collect-profiles-29522940-ch6tl\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.393470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-config-volume\") pod \"collect-profiles-29522940-ch6tl\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.394604 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-config-volume\") pod 
\"collect-profiles-29522940-ch6tl\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.402257 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-secret-volume\") pod \"collect-profiles-29522940-ch6tl\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.425411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jfwj\" (UniqueName: \"kubernetes.io/projected/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-kube-api-access-2jfwj\") pod \"collect-profiles-29522940-ch6tl\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:00 crc kubenswrapper[4858]: I0218 01:00:00.509585 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:01 crc kubenswrapper[4858]: I0218 01:00:01.018936 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl"] Feb 18 01:00:01 crc kubenswrapper[4858]: I0218 01:00:01.843678 4858 generic.go:334] "Generic (PLEG): container finished" podID="7cd2dd73-4a3b-4264-bdac-060f1c49c9e6" containerID="5162e818e0bf9da62026d90d61740137d5447060b453d78eb1d715836d1db42d" exitCode=0 Feb 18 01:00:01 crc kubenswrapper[4858]: I0218 01:00:01.843729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" event={"ID":"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6","Type":"ContainerDied","Data":"5162e818e0bf9da62026d90d61740137d5447060b453d78eb1d715836d1db42d"} Feb 18 01:00:01 crc kubenswrapper[4858]: I0218 01:00:01.844010 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" event={"ID":"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6","Type":"ContainerStarted","Data":"aac365cd2781ad24a5f2a4648ab6f23b957b69f1369bb824083c90811765d7c6"} Feb 18 01:00:02 crc kubenswrapper[4858]: E0218 01:00:02.547076 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:00:02 crc kubenswrapper[4858]: E0218 01:00:02.547368 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:00:02 crc kubenswrapper[4858]: E0218 01:00:02.547524 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:00:02 crc kubenswrapper[4858]: E0218 01:00:02.548731 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.323353 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.477161 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jfwj\" (UniqueName: \"kubernetes.io/projected/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-kube-api-access-2jfwj\") pod \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.477428 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-config-volume\") pod \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.477670 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-secret-volume\") pod \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\" (UID: \"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6\") " Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.479595 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "7cd2dd73-4a3b-4264-bdac-060f1c49c9e6" (UID: "7cd2dd73-4a3b-4264-bdac-060f1c49c9e6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.484001 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7cd2dd73-4a3b-4264-bdac-060f1c49c9e6" (UID: "7cd2dd73-4a3b-4264-bdac-060f1c49c9e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.495679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-kube-api-access-2jfwj" (OuterVolumeSpecName: "kube-api-access-2jfwj") pod "7cd2dd73-4a3b-4264-bdac-060f1c49c9e6" (UID: "7cd2dd73-4a3b-4264-bdac-060f1c49c9e6"). InnerVolumeSpecName "kube-api-access-2jfwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.580523 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.580561 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jfwj\" (UniqueName: \"kubernetes.io/projected/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-kube-api-access-2jfwj\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.580572 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.866835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" event={"ID":"7cd2dd73-4a3b-4264-bdac-060f1c49c9e6","Type":"ContainerDied","Data":"aac365cd2781ad24a5f2a4648ab6f23b957b69f1369bb824083c90811765d7c6"} Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.867181 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac365cd2781ad24a5f2a4648ab6f23b957b69f1369bb824083c90811765d7c6" Feb 18 01:00:03 crc kubenswrapper[4858]: I0218 01:00:03.866928 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl" Feb 18 01:00:06 crc kubenswrapper[4858]: E0218 01:00:06.973627 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 18 01:00:08 crc kubenswrapper[4858]: I0218 01:00:08.419296 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:00:08 crc kubenswrapper[4858]: E0218 01:00:08.419954 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:00:09 crc kubenswrapper[4858]: E0218 01:00:09.422400 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:00:15 crc kubenswrapper[4858]: E0218 01:00:15.422905 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:00:21 crc kubenswrapper[4858]: E0218 01:00:21.422662 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:00:23 crc kubenswrapper[4858]: I0218 01:00:23.422154 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:00:23 crc kubenswrapper[4858]: E0218 01:00:23.423331 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:00:27 crc kubenswrapper[4858]: E0218 01:00:27.440124 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:00:28 crc kubenswrapper[4858]: I0218 01:00:28.720074 4858 scope.go:117] "RemoveContainer" containerID="cd57ef83a6af653dfb5926b9290842f9dfada85b09ef09641fff73292c3f5a89" Feb 18 01:00:34 crc kubenswrapper[4858]: I0218 01:00:34.419632 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:00:34 crc kubenswrapper[4858]: E0218 01:00:34.420766 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:00:34 crc kubenswrapper[4858]: E0218 01:00:34.422836 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:00:39 crc kubenswrapper[4858]: E0218 01:00:39.422993 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.040830 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-78bq8"] Feb 18 01:00:46 crc kubenswrapper[4858]: E0218 01:00:46.041868 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd2dd73-4a3b-4264-bdac-060f1c49c9e6" containerName="collect-profiles" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.041883 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7cd2dd73-4a3b-4264-bdac-060f1c49c9e6" containerName="collect-profiles" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.042162 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd2dd73-4a3b-4264-bdac-060f1c49c9e6" containerName="collect-profiles" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.043713 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.093738 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78bq8"] Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.132421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-utilities\") pod \"community-operators-78bq8\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.132482 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27lgp\" (UniqueName: \"kubernetes.io/projected/8b67b261-8965-44b3-8b00-2ea9e8313d8d-kube-api-access-27lgp\") pod \"community-operators-78bq8\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.132539 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-catalog-content\") pod \"community-operators-78bq8\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.234706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-utilities\") pod \"community-operators-78bq8\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.234802 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27lgp\" (UniqueName: \"kubernetes.io/projected/8b67b261-8965-44b3-8b00-2ea9e8313d8d-kube-api-access-27lgp\") pod \"community-operators-78bq8\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.234887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-catalog-content\") pod \"community-operators-78bq8\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.235684 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-catalog-content\") pod \"community-operators-78bq8\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.235685 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-utilities\") pod \"community-operators-78bq8\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.261883 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27lgp\" (UniqueName: \"kubernetes.io/projected/8b67b261-8965-44b3-8b00-2ea9e8313d8d-kube-api-access-27lgp\") pod \"community-operators-78bq8\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.375591 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:00:46 crc kubenswrapper[4858]: I0218 01:00:46.829347 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78bq8"] Feb 18 01:00:47 crc kubenswrapper[4858]: I0218 01:00:47.415767 4858 generic.go:334] "Generic (PLEG): container finished" podID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerID="f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca" exitCode=0 Feb 18 01:00:47 crc kubenswrapper[4858]: I0218 01:00:47.415842 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78bq8" event={"ID":"8b67b261-8965-44b3-8b00-2ea9e8313d8d","Type":"ContainerDied","Data":"f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca"} Feb 18 01:00:47 crc kubenswrapper[4858]: I0218 01:00:47.415899 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78bq8" event={"ID":"8b67b261-8965-44b3-8b00-2ea9e8313d8d","Type":"ContainerStarted","Data":"a201382331646b91ae673257461029fcf4759feb4df436c31ec8217e54b1b961"} Feb 18 01:00:48 crc kubenswrapper[4858]: E0218 01:00:48.424343 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:00:49 crc kubenswrapper[4858]: I0218 01:00:49.420118 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:00:49 crc kubenswrapper[4858]: E0218 01:00:49.420733 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:00:49 crc kubenswrapper[4858]: I0218 01:00:49.807528 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r6xtq"] Feb 18 01:00:49 crc kubenswrapper[4858]: I0218 01:00:49.809722 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:49 crc kubenswrapper[4858]: I0218 01:00:49.839912 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6xtq"] Feb 18 01:00:49 crc kubenswrapper[4858]: I0218 01:00:49.921244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-utilities\") pod \"redhat-marketplace-r6xtq\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:49 crc kubenswrapper[4858]: I0218 01:00:49.921327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8jh\" (UniqueName: \"kubernetes.io/projected/b9202969-1b3a-4cc4-8c44-097c54b79cf0-kube-api-access-vd8jh\") pod \"redhat-marketplace-r6xtq\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:49 crc kubenswrapper[4858]: I0218 01:00:49.921457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-catalog-content\") pod \"redhat-marketplace-r6xtq\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:50 crc kubenswrapper[4858]: I0218 01:00:50.022942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8jh\" (UniqueName: \"kubernetes.io/projected/b9202969-1b3a-4cc4-8c44-097c54b79cf0-kube-api-access-vd8jh\") pod \"redhat-marketplace-r6xtq\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:50 crc kubenswrapper[4858]: I0218 01:00:50.023458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-catalog-content\") pod \"redhat-marketplace-r6xtq\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:50 crc kubenswrapper[4858]: I0218 01:00:50.023639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-utilities\") pod \"redhat-marketplace-r6xtq\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:50 crc kubenswrapper[4858]: I0218 01:00:50.023982 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-catalog-content\") pod \"redhat-marketplace-r6xtq\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:50 crc kubenswrapper[4858]: I0218 01:00:50.024011 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-utilities\") pod \"redhat-marketplace-r6xtq\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:50 crc kubenswrapper[4858]: I0218 01:00:50.045673 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vd8jh\" (UniqueName: \"kubernetes.io/projected/b9202969-1b3a-4cc4-8c44-097c54b79cf0-kube-api-access-vd8jh\") pod \"redhat-marketplace-r6xtq\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:50 crc kubenswrapper[4858]: I0218 01:00:50.138997 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:00:50 crc kubenswrapper[4858]: I0218 01:00:50.736689 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6xtq"] Feb 18 01:00:50 crc kubenswrapper[4858]: W0218 01:00:50.738750 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9202969_1b3a_4cc4_8c44_097c54b79cf0.slice/crio-d5699db470380344df77c66bc45981984cf1963def00d2f360684c3358a77956 WatchSource:0}: Error finding container d5699db470380344df77c66bc45981984cf1963def00d2f360684c3358a77956: Status 404 returned error can't find the container with id d5699db470380344df77c66bc45981984cf1963def00d2f360684c3358a77956 Feb 18 01:00:51 crc kubenswrapper[4858]: I0218 01:00:51.454286 4858 generic.go:334] "Generic (PLEG): container finished" podID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerID="914dfdb582acc02bc9802dbd973c509cf956bbf8dc130bad48d23698b9f04648" exitCode=0 Feb 18 01:00:51 crc kubenswrapper[4858]: I0218 01:00:51.454336 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6xtq" event={"ID":"b9202969-1b3a-4cc4-8c44-097c54b79cf0","Type":"ContainerDied","Data":"914dfdb582acc02bc9802dbd973c509cf956bbf8dc130bad48d23698b9f04648"} Feb 18 01:00:51 crc kubenswrapper[4858]: I0218 01:00:51.454625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6xtq" event={"ID":"b9202969-1b3a-4cc4-8c44-097c54b79cf0","Type":"ContainerStarted","Data":"d5699db470380344df77c66bc45981984cf1963def00d2f360684c3358a77956"} Feb 18 01:00:52 crc kubenswrapper[4858]: E0218 01:00:52.421440 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.173425 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522941-8t6vf"] Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.175588 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.193800 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522941-8t6vf"] Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.248588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpdfk\" (UniqueName: \"kubernetes.io/projected/2201a764-9f38-4708-b9ef-14515082aae5-kube-api-access-mpdfk\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.248944 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-fernet-keys\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.249325 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-config-data\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.249383 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-combined-ca-bundle\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.351415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpdfk\" (UniqueName: \"kubernetes.io/projected/2201a764-9f38-4708-b9ef-14515082aae5-kube-api-access-mpdfk\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.351882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-fernet-keys\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.353367 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-config-data\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.354050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-combined-ca-bundle\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.359596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-fernet-keys\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.360035 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-config-data\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.360451 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-combined-ca-bundle\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.369196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpdfk\" (UniqueName: \"kubernetes.io/projected/2201a764-9f38-4708-b9ef-14515082aae5-kube-api-access-mpdfk\") pod \"keystone-cron-29522941-8t6vf\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:00 crc kubenswrapper[4858]: E0218 01:01:00.423225 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:01:00 crc kubenswrapper[4858]: I0218 01:01:00.518650 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:01 crc kubenswrapper[4858]: I0218 01:01:01.088900 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522941-8t6vf"] Feb 18 01:01:01 crc kubenswrapper[4858]: I0218 01:01:01.581564 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522941-8t6vf" event={"ID":"2201a764-9f38-4708-b9ef-14515082aae5","Type":"ContainerStarted","Data":"c90699c111c26d5409303ed203dd2eb8344dc32cbe51db55dbeb9e4e74b0c6e5"} Feb 18 01:01:01 crc kubenswrapper[4858]: I0218 01:01:01.582701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522941-8t6vf" event={"ID":"2201a764-9f38-4708-b9ef-14515082aae5","Type":"ContainerStarted","Data":"e26c59e4c060967aa3e6c1da294324977506329c935d9ed4ad61736d57f61cf7"} Feb 18 01:01:01 crc kubenswrapper[4858]: I0218 01:01:01.601685 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522941-8t6vf" podStartSLOduration=1.601664904 podStartE2EDuration="1.601664904s" podCreationTimestamp="2026-02-18 01:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 01:01:01.597171025 +0000 UTC m=+1614.903007777" watchObservedRunningTime="2026-02-18 01:01:01.601664904 +0000 UTC m=+1614.907501656" Feb 18 01:01:03 crc kubenswrapper[4858]: I0218 01:01:03.604634 4858 generic.go:334] "Generic (PLEG): container finished" podID="2201a764-9f38-4708-b9ef-14515082aae5" containerID="c90699c111c26d5409303ed203dd2eb8344dc32cbe51db55dbeb9e4e74b0c6e5" exitCode=0 Feb 18 01:01:03 crc kubenswrapper[4858]: I0218 01:01:03.605056 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522941-8t6vf" event={"ID":"2201a764-9f38-4708-b9ef-14515082aae5","Type":"ContainerDied","Data":"c90699c111c26d5409303ed203dd2eb8344dc32cbe51db55dbeb9e4e74b0c6e5"} Feb 18 01:01:04 crc kubenswrapper[4858]: I0218 01:01:04.420147 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:01:04 crc kubenswrapper[4858]: E0218 01:01:04.420696 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.094529 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.172112 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-fernet-keys\") pod \"2201a764-9f38-4708-b9ef-14515082aae5\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.172244 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-combined-ca-bundle\") pod \"2201a764-9f38-4708-b9ef-14515082aae5\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.172352 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-config-data\") pod \"2201a764-9f38-4708-b9ef-14515082aae5\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.172421 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpdfk\" (UniqueName: \"kubernetes.io/projected/2201a764-9f38-4708-b9ef-14515082aae5-kube-api-access-mpdfk\") pod \"2201a764-9f38-4708-b9ef-14515082aae5\" (UID: \"2201a764-9f38-4708-b9ef-14515082aae5\") " Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.186312 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2201a764-9f38-4708-b9ef-14515082aae5" (UID: "2201a764-9f38-4708-b9ef-14515082aae5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.194057 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2201a764-9f38-4708-b9ef-14515082aae5-kube-api-access-mpdfk" (OuterVolumeSpecName: "kube-api-access-mpdfk") pod "2201a764-9f38-4708-b9ef-14515082aae5" (UID: "2201a764-9f38-4708-b9ef-14515082aae5"). InnerVolumeSpecName "kube-api-access-mpdfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.204802 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2201a764-9f38-4708-b9ef-14515082aae5" (UID: "2201a764-9f38-4708-b9ef-14515082aae5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.232807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-config-data" (OuterVolumeSpecName: "config-data") pod "2201a764-9f38-4708-b9ef-14515082aae5" (UID: "2201a764-9f38-4708-b9ef-14515082aae5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.274207 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.274444 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.274541 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2201a764-9f38-4708-b9ef-14515082aae5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.274645 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpdfk\" (UniqueName: \"kubernetes.io/projected/2201a764-9f38-4708-b9ef-14515082aae5-kube-api-access-mpdfk\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.632645 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522941-8t6vf" event={"ID":"2201a764-9f38-4708-b9ef-14515082aae5","Type":"ContainerDied","Data":"e26c59e4c060967aa3e6c1da294324977506329c935d9ed4ad61736d57f61cf7"} Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.632715 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e26c59e4c060967aa3e6c1da294324977506329c935d9ed4ad61736d57f61cf7" Feb 18 01:01:05 crc kubenswrapper[4858]: I0218 01:01:05.632738 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522941-8t6vf" Feb 18 01:01:07 crc kubenswrapper[4858]: E0218 01:01:07.430411 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:01:09 crc kubenswrapper[4858]: I0218 01:01:09.694183 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78bq8" event={"ID":"8b67b261-8965-44b3-8b00-2ea9e8313d8d","Type":"ContainerStarted","Data":"1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84"} Feb 18 01:01:10 crc kubenswrapper[4858]: I0218 01:01:10.707794 4858 generic.go:334] "Generic (PLEG): container finished" podID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerID="1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84" exitCode=0 Feb 18 01:01:10 crc kubenswrapper[4858]: I0218 01:01:10.707860 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78bq8" event={"ID":"8b67b261-8965-44b3-8b00-2ea9e8313d8d","Type":"ContainerDied","Data":"1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84"} Feb 18 01:01:11 crc kubenswrapper[4858]: I0218 01:01:11.725118 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6xtq" event={"ID":"b9202969-1b3a-4cc4-8c44-097c54b79cf0","Type":"ContainerStarted","Data":"3675f1af40d11e92ffce2a4a3cfc1a8cb5e428b1c119e8a0e5f026f39af4c051"} Feb 18 01:01:11 crc kubenswrapper[4858]: I0218 01:01:11.729782 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78bq8" event={"ID":"8b67b261-8965-44b3-8b00-2ea9e8313d8d","Type":"ContainerStarted","Data":"251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b"} Feb 18 01:01:11 crc kubenswrapper[4858]: I0218 01:01:11.780170 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-78bq8" podStartSLOduration=2.123074247 podStartE2EDuration="25.780142997s" podCreationTimestamp="2026-02-18 01:00:46 +0000 UTC" firstStartedPulling="2026-02-18 01:00:47.449486589 +0000 UTC m=+1600.755323341" lastFinishedPulling="2026-02-18 01:01:11.106555359 +0000 UTC m=+1624.412392091" observedRunningTime="2026-02-18 01:01:11.769427218 +0000 UTC m=+1625.075263980" watchObservedRunningTime="2026-02-18 01:01:11.780142997 +0000 UTC m=+1625.085979739" Feb 18 01:01:12 crc kubenswrapper[4858]: I0218 01:01:12.744346 4858 generic.go:334] "Generic (PLEG): container finished" podID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerID="3675f1af40d11e92ffce2a4a3cfc1a8cb5e428b1c119e8a0e5f026f39af4c051" exitCode=0 Feb 18 01:01:12 crc kubenswrapper[4858]: I0218 01:01:12.744434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6xtq" event={"ID":"b9202969-1b3a-4cc4-8c44-097c54b79cf0","Type":"ContainerDied","Data":"3675f1af40d11e92ffce2a4a3cfc1a8cb5e428b1c119e8a0e5f026f39af4c051"} Feb 18 01:01:12 crc kubenswrapper[4858]: I0218 01:01:12.748627 4858 generic.go:334] "Generic (PLEG): container finished" podID="2b6904c5-bb8c-4534-a12c-723f228bcf32" containerID="1ce5e38faba737ce5f50e273a22cd22a87a9f3ecdb5650ca245c126197fbe82f" exitCode=0 Feb 18 01:01:12 crc kubenswrapper[4858]: I0218 01:01:12.748731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" event={"ID":"2b6904c5-bb8c-4534-a12c-723f228bcf32","Type":"ContainerDied","Data":"1ce5e38faba737ce5f50e273a22cd22a87a9f3ecdb5650ca245c126197fbe82f"} Feb 18 01:01:13 crc kubenswrapper[4858]: I0218 01:01:13.761171 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6xtq" event={"ID":"b9202969-1b3a-4cc4-8c44-097c54b79cf0","Type":"ContainerStarted","Data":"c0a4ad85428ef7275269488fb893f9f571aefb52e5bf57ce58bf5e84c165ceed"} Feb 18 01:01:13 crc kubenswrapper[4858]: I0218 01:01:13.793081 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r6xtq" podStartSLOduration=3.047247487 podStartE2EDuration="24.793057811s" podCreationTimestamp="2026-02-18 01:00:49 +0000 UTC" firstStartedPulling="2026-02-18 01:00:51.456414024 +0000 UTC m=+1604.762250766" lastFinishedPulling="2026-02-18 01:01:13.202224358 +0000 UTC m=+1626.508061090" observedRunningTime="2026-02-18 01:01:13.784483073 +0000 UTC m=+1627.090319825" watchObservedRunningTime="2026-02-18 01:01:13.793057811 +0000 UTC m=+1627.098894543" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.340634 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 01:01:14 crc kubenswrapper[4858]: E0218 01:01:14.420935 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.477973 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-bootstrap-combined-ca-bundle\") pod \"2b6904c5-bb8c-4534-a12c-723f228bcf32\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.478124 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-inventory\") pod \"2b6904c5-bb8c-4534-a12c-723f228bcf32\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.478166 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhr4f\" (UniqueName: \"kubernetes.io/projected/2b6904c5-bb8c-4534-a12c-723f228bcf32-kube-api-access-xhr4f\") pod \"2b6904c5-bb8c-4534-a12c-723f228bcf32\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.478999 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-ssh-key-openstack-edpm-ipam\") pod \"2b6904c5-bb8c-4534-a12c-723f228bcf32\" (UID: \"2b6904c5-bb8c-4534-a12c-723f228bcf32\") " Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.487347 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6904c5-bb8c-4534-a12c-723f228bcf32-kube-api-access-xhr4f" (OuterVolumeSpecName: "kube-api-access-xhr4f") pod "2b6904c5-bb8c-4534-a12c-723f228bcf32" (UID: "2b6904c5-bb8c-4534-a12c-723f228bcf32"). InnerVolumeSpecName "kube-api-access-xhr4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.501221 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2b6904c5-bb8c-4534-a12c-723f228bcf32" (UID: "2b6904c5-bb8c-4534-a12c-723f228bcf32"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.509060 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-inventory" (OuterVolumeSpecName: "inventory") pod "2b6904c5-bb8c-4534-a12c-723f228bcf32" (UID: "2b6904c5-bb8c-4534-a12c-723f228bcf32"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.510993 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b6904c5-bb8c-4534-a12c-723f228bcf32" (UID: "2b6904c5-bb8c-4534-a12c-723f228bcf32"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.586006 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.586062 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhr4f\" (UniqueName: \"kubernetes.io/projected/2b6904c5-bb8c-4534-a12c-723f228bcf32-kube-api-access-xhr4f\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.586078 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.586101 4858 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6904c5-bb8c-4534-a12c-723f228bcf32-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.791033 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.791819 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf" event={"ID":"2b6904c5-bb8c-4534-a12c-723f228bcf32","Type":"ContainerDied","Data":"04ffb4a3cfe560364e1490ce68b03d5e9c9b028a06499627e0e5eb0546f3a815"} Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.791848 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04ffb4a3cfe560364e1490ce68b03d5e9c9b028a06499627e0e5eb0546f3a815" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.886946 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525"] Feb 18 01:01:14 crc kubenswrapper[4858]: E0218 01:01:14.887416 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2201a764-9f38-4708-b9ef-14515082aae5" containerName="keystone-cron" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.887434 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2201a764-9f38-4708-b9ef-14515082aae5" containerName="keystone-cron" Feb 18 01:01:14 crc kubenswrapper[4858]: E0218 01:01:14.887503 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6904c5-bb8c-4534-a12c-723f228bcf32" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.887511 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6904c5-bb8c-4534-a12c-723f228bcf32" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.887715 4858 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2201a764-9f38-4708-b9ef-14515082aae5" containerName="keystone-cron" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.887732 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6904c5-bb8c-4534-a12c-723f228bcf32" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.888459 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.890943 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.891472 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.891780 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.892028 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.904140 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525"] Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.993182 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nz525\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.993278 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnbgc\" (UniqueName: \"kubernetes.io/projected/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-kube-api-access-tnbgc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nz525\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:14 crc kubenswrapper[4858]: I0218 01:01:14.993523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nz525\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:15 crc kubenswrapper[4858]: I0218 01:01:15.096304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nz525\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:15 crc kubenswrapper[4858]: I0218 01:01:15.096954 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbgc\" (UniqueName: 
\"kubernetes.io/projected/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-kube-api-access-tnbgc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nz525\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:15 crc kubenswrapper[4858]: I0218 01:01:15.097085 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nz525\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:15 crc kubenswrapper[4858]: I0218 01:01:15.100940 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nz525\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:15 crc kubenswrapper[4858]: I0218 01:01:15.101651 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nz525\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:15 crc kubenswrapper[4858]: I0218 01:01:15.114984 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbgc\" (UniqueName: \"kubernetes.io/projected/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-kube-api-access-tnbgc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nz525\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:15 crc kubenswrapper[4858]: I0218 01:01:15.206192 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:01:15 crc kubenswrapper[4858]: I0218 01:01:15.866382 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525"] Feb 18 01:01:16 crc kubenswrapper[4858]: I0218 01:01:16.376036 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:01:16 crc kubenswrapper[4858]: I0218 01:01:16.376254 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:01:16 crc kubenswrapper[4858]: I0218 01:01:16.420000 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:01:16 crc kubenswrapper[4858]: E0218 01:01:16.420306 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:01:16 crc kubenswrapper[4858]: I0218 01:01:16.453505 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:01:16 crc kubenswrapper[4858]: I0218 01:01:16.818136 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" event={"ID":"93cfe4a9-20e3-4c13-82bb-7c3c634214ce","Type":"ContainerStarted","Data":"90c0c3ae88d5afaed1a4bd4e2de057ec71ced58a5fbe0c02a3e3b0ee455ba8fc"} Feb 18 01:01:16 crc kubenswrapper[4858]: I0218 01:01:16.819600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" event={"ID":"93cfe4a9-20e3-4c13-82bb-7c3c634214ce","Type":"ContainerStarted","Data":"024de63752ec30435cc3a7b526290ca0a627790bd4f9c545c00aefa5ef809894"} Feb 18 01:01:16 crc kubenswrapper[4858]: I0218 01:01:16.848304 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" podStartSLOduration=2.453617612 podStartE2EDuration="2.848273803s" podCreationTimestamp="2026-02-18 01:01:14 +0000 UTC" firstStartedPulling="2026-02-18 01:01:15.875664811 +0000 UTC m=+1629.181501553" lastFinishedPulling="2026-02-18 01:01:16.270320972 +0000 UTC m=+1629.576157744" observedRunningTime="2026-02-18 01:01:16.840123675 +0000 UTC m=+1630.145960437" watchObservedRunningTime="2026-02-18 01:01:16.848273803 +0000 UTC m=+1630.154110565" Feb 18 01:01:16 crc kubenswrapper[4858]: I0218 01:01:16.908409 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:01:17 crc kubenswrapper[4858]: I0218 01:01:17.233157 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78bq8"] Feb 18 01:01:18 crc kubenswrapper[4858]: I0218 01:01:18.839240 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-78bq8" podUID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerName="registry-server" 
containerID="cri-o://251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b" gracePeriod=2 Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.437815 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.511018 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-catalog-content\") pod \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.511287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-utilities\") pod \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.511369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27lgp\" (UniqueName: \"kubernetes.io/projected/8b67b261-8965-44b3-8b00-2ea9e8313d8d-kube-api-access-27lgp\") pod \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\" (UID: \"8b67b261-8965-44b3-8b00-2ea9e8313d8d\") " Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.513678 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-utilities" (OuterVolumeSpecName: "utilities") pod "8b67b261-8965-44b3-8b00-2ea9e8313d8d" (UID: "8b67b261-8965-44b3-8b00-2ea9e8313d8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.519693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b67b261-8965-44b3-8b00-2ea9e8313d8d-kube-api-access-27lgp" (OuterVolumeSpecName: "kube-api-access-27lgp") pod "8b67b261-8965-44b3-8b00-2ea9e8313d8d" (UID: "8b67b261-8965-44b3-8b00-2ea9e8313d8d"). InnerVolumeSpecName "kube-api-access-27lgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.576565 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b67b261-8965-44b3-8b00-2ea9e8313d8d" (UID: "8b67b261-8965-44b3-8b00-2ea9e8313d8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.614562 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.614629 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27lgp\" (UniqueName: \"kubernetes.io/projected/8b67b261-8965-44b3-8b00-2ea9e8313d8d-kube-api-access-27lgp\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.614647 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b67b261-8965-44b3-8b00-2ea9e8313d8d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.856713 4858 generic.go:334] "Generic (PLEG): container finished" podID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerID="251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b" exitCode=0 Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.856757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78bq8" event={"ID":"8b67b261-8965-44b3-8b00-2ea9e8313d8d","Type":"ContainerDied","Data":"251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b"} Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.856786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78bq8" event={"ID":"8b67b261-8965-44b3-8b00-2ea9e8313d8d","Type":"ContainerDied","Data":"a201382331646b91ae673257461029fcf4759feb4df436c31ec8217e54b1b961"} Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.856807 4858 scope.go:117] "RemoveContainer" containerID="251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.856966 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-78bq8" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.908706 4858 scope.go:117] "RemoveContainer" containerID="1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84" Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.910432 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78bq8"] Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.924299 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-78bq8"] Feb 18 01:01:19 crc kubenswrapper[4858]: I0218 01:01:19.938645 4858 scope.go:117] "RemoveContainer" containerID="f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.003569 4858 scope.go:117] "RemoveContainer" containerID="251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b" Feb 18 01:01:20 crc kubenswrapper[4858]: E0218 01:01:20.004219 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b\": container with ID starting with 251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b not found: ID does not exist" containerID="251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.004276 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b"} err="failed to get container status \"251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b\": rpc error: code = NotFound desc = could not find container \"251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b\": container with ID starting with 251f3414aaff968267023e4107674dbc85df1fbb4ea585d5d31d2dfea00e353b not found: ID does not exist" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.004308 4858 scope.go:117] "RemoveContainer" containerID="1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84" Feb 18 01:01:20 crc kubenswrapper[4858]: E0218 01:01:20.005319 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84\": container with ID starting with 1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84 not found: ID does not exist" containerID="1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.005360 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84"} err="failed to get container status \"1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84\": rpc error: code = NotFound desc = could not find container \"1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84\": container with ID starting with 1581693edcf548cb8684fa0ec0bd4aa933d905c24b3c3dfb7623105daf961c84 not found: ID does not exist" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.005386 4858 scope.go:117] "RemoveContainer" containerID="f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca" Feb 18 01:01:20 crc kubenswrapper[4858]: E0218 01:01:20.005774 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca\": container with ID starting with f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca not found: ID does not exist" containerID="f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.005807 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca"} err="failed to get container status \"f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca\": rpc error: code = NotFound desc = could not find container \"f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca\": container with ID starting with f32761ba081c928cef27dced76cce315494a22b78f225913edca101701dcc3ca not found: ID does not exist" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.140977 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.141039 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.191567 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:01:20 crc kubenswrapper[4858]: E0218 01:01:20.422228 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:01:20 crc kubenswrapper[4858]: I0218 01:01:20.969458 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:01:21 crc kubenswrapper[4858]: I0218 01:01:21.437067 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" path="/var/lib/kubelet/pods/8b67b261-8965-44b3-8b00-2ea9e8313d8d/volumes" Feb 18 01:01:23 crc kubenswrapper[4858]: I0218 01:01:23.239306 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6xtq"] Feb 18 01:01:23 crc kubenswrapper[4858]: I0218 01:01:23.239871 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r6xtq" podUID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerName="registry-server" containerID="cri-o://c0a4ad85428ef7275269488fb893f9f571aefb52e5bf57ce58bf5e84c165ceed" gracePeriod=2 Feb 18 01:01:23 crc kubenswrapper[4858]: I0218 01:01:23.922257 4858 generic.go:334] "Generic (PLEG): container finished" podID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerID="c0a4ad85428ef7275269488fb893f9f571aefb52e5bf57ce58bf5e84c165ceed" exitCode=0 Feb 18 01:01:23 crc kubenswrapper[4858]: I0218 01:01:23.922327 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6xtq" event={"ID":"b9202969-1b3a-4cc4-8c44-097c54b79cf0","Type":"ContainerDied","Data":"c0a4ad85428ef7275269488fb893f9f571aefb52e5bf57ce58bf5e84c165ceed"} Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.012471 4858 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.117173 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd8jh\" (UniqueName: \"kubernetes.io/projected/b9202969-1b3a-4cc4-8c44-097c54b79cf0-kube-api-access-vd8jh\") pod \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.117471 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-utilities\") pod \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.117541 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-catalog-content\") pod \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\" (UID: \"b9202969-1b3a-4cc4-8c44-097c54b79cf0\") " Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.118939 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-utilities" (OuterVolumeSpecName: "utilities") pod "b9202969-1b3a-4cc4-8c44-097c54b79cf0" (UID: "b9202969-1b3a-4cc4-8c44-097c54b79cf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.123368 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9202969-1b3a-4cc4-8c44-097c54b79cf0-kube-api-access-vd8jh" (OuterVolumeSpecName: "kube-api-access-vd8jh") pod "b9202969-1b3a-4cc4-8c44-097c54b79cf0" (UID: "b9202969-1b3a-4cc4-8c44-097c54b79cf0"). InnerVolumeSpecName "kube-api-access-vd8jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.151721 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9202969-1b3a-4cc4-8c44-097c54b79cf0" (UID: "b9202969-1b3a-4cc4-8c44-097c54b79cf0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.220463 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd8jh\" (UniqueName: \"kubernetes.io/projected/b9202969-1b3a-4cc4-8c44-097c54b79cf0-kube-api-access-vd8jh\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.221113 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.221165 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9202969-1b3a-4cc4-8c44-097c54b79cf0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.941851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6xtq" event={"ID":"b9202969-1b3a-4cc4-8c44-097c54b79cf0","Type":"ContainerDied","Data":"d5699db470380344df77c66bc45981984cf1963def00d2f360684c3358a77956"} Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.941929 4858 scope.go:117] "RemoveContainer" containerID="c0a4ad85428ef7275269488fb893f9f571aefb52e5bf57ce58bf5e84c165ceed" Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.941957 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6xtq" Feb 18 01:01:24 crc kubenswrapper[4858]: I0218 01:01:24.969312 4858 scope.go:117] "RemoveContainer" containerID="3675f1af40d11e92ffce2a4a3cfc1a8cb5e428b1c119e8a0e5f026f39af4c051" Feb 18 01:01:25 crc kubenswrapper[4858]: I0218 01:01:25.017621 4858 scope.go:117] "RemoveContainer" containerID="914dfdb582acc02bc9802dbd973c509cf956bbf8dc130bad48d23698b9f04648" Feb 18 01:01:25 crc kubenswrapper[4858]: I0218 01:01:25.018625 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6xtq"] Feb 18 01:01:25 crc kubenswrapper[4858]: I0218 01:01:25.048721 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6xtq"] Feb 18 01:01:25 crc kubenswrapper[4858]: I0218 01:01:25.461212 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" path="/var/lib/kubelet/pods/b9202969-1b3a-4cc4-8c44-097c54b79cf0/volumes" Feb 18 01:01:28 crc kubenswrapper[4858]: I0218 01:01:28.420066 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:01:28 crc kubenswrapper[4858]: E0218 01:01:28.420795 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:01:29 crc kubenswrapper[4858]: E0218 01:01:29.427862 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" 
podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:01:33 crc kubenswrapper[4858]: E0218 01:01:33.426473 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:01:43 crc kubenswrapper[4858]: I0218 01:01:43.420806 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:01:43 crc kubenswrapper[4858]: E0218 01:01:43.421883 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:01:43 crc kubenswrapper[4858]: E0218 01:01:43.423031 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:01:46 crc kubenswrapper[4858]: E0218 01:01:46.423917 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:01:58 crc kubenswrapper[4858]: I0218 01:01:58.419412 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:01:58 crc kubenswrapper[4858]: E0218 01:01:58.420086 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:01:58 crc kubenswrapper[4858]: E0218 01:01:58.422596 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:01:59 crc kubenswrapper[4858]: E0218 01:01:59.422824 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:02:09 crc kubenswrapper[4858]: E0218 01:02:09.422331 4858 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:02:10 crc kubenswrapper[4858]: I0218 01:02:10.042547 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-98e5-account-create-update-hhlbm"] Feb 18 01:02:10 crc kubenswrapper[4858]: I0218 01:02:10.053788 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-98e5-account-create-update-hhlbm"] Feb 18 01:02:11 crc kubenswrapper[4858]: I0218 01:02:11.438030 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4682a2f1-a646-4f7e-9b03-578bbe315f48" path="/var/lib/kubelet/pods/4682a2f1-a646-4f7e-9b03-578bbe315f48/volumes" Feb 18 01:02:12 crc kubenswrapper[4858]: I0218 01:02:12.419890 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:02:12 crc kubenswrapper[4858]: E0218 01:02:12.420347 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.052397 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7pckt"] Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.071917 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2c0e-account-create-update-4kjnq"] Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.084349 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7tfnm"] Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.094035 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5l2bs"] Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.103818 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7pckt"] Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.114926 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5l2bs"] Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.126598 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7tfnm"] Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.139422 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2c0e-account-create-update-4kjnq"] Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.152646 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3a23-account-create-update-tl98n"] Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.163749 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3a23-account-create-update-tl98n"] Feb 18 01:02:13 crc kubenswrapper[4858]: E0218 01:02:13.422308 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.431087 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bdc4a39-ee6d-47eb-bb82-665a206a9690" path="/var/lib/kubelet/pods/2bdc4a39-ee6d-47eb-bb82-665a206a9690/volumes" Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.431685 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="823f6441-ed95-4f51-82c1-b8063d153460" path="/var/lib/kubelet/pods/823f6441-ed95-4f51-82c1-b8063d153460/volumes" Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.432223 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eba52de-f7c0-4843-941d-20a57d0e012b" path="/var/lib/kubelet/pods/8eba52de-f7c0-4843-941d-20a57d0e012b/volumes" Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.432776 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c577ca-985a-4041-a06e-f987c0cd3608" path="/var/lib/kubelet/pods/d4c577ca-985a-4041-a06e-f987c0cd3608/volumes" Feb 18 01:02:13 crc kubenswrapper[4858]: I0218 01:02:13.436252 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78e85b8-d7a3-4b15-991b-6104ba1ffe95" path="/var/lib/kubelet/pods/f78e85b8-d7a3-4b15-991b-6104ba1ffe95/volumes" Feb 18 01:02:23 crc kubenswrapper[4858]: E0218 01:02:23.421585 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:02:24 crc kubenswrapper[4858]: I0218 01:02:24.419749 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:02:24 crc kubenswrapper[4858]: E0218 01:02:24.420669 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:02:24 crc kubenswrapper[4858]: E0218 01:02:24.422545 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:02:28 crc kubenswrapper[4858]: I0218 01:02:28.920702 4858 scope.go:117] "RemoveContainer" containerID="2b69a4de4e096880237a7f1a3d7385679f75c319ef580676f605d0998a665853" Feb 18 01:02:28 crc kubenswrapper[4858]: I0218 01:02:28.950330 4858 scope.go:117] "RemoveContainer" containerID="0443d1ea6e1d1456c9f5549b6bef28924e20efcd54eaf8fe392b0298c8eea250" Feb 18 01:02:28 crc kubenswrapper[4858]: I0218 01:02:28.999172 4858 scope.go:117] "RemoveContainer" containerID="2d3fe8f13155b55798497843ec969545b53598a126ec18c8c80342abd5186a0e" Feb 18 01:02:29 crc kubenswrapper[4858]: I0218 01:02:29.048228 4858 scope.go:117] "RemoveContainer" 
containerID="9e9eea4a7486b91528e077dcbfef2001f0332c3caa34c86366263d261db70bc0" Feb 18 01:02:29 crc kubenswrapper[4858]: I0218 01:02:29.091411 4858 scope.go:117] "RemoveContainer" containerID="b7ccfe17b67f842a2c7787ee0076fd9dce772b920a2c51781a44ce67c5f45cbd" Feb 18 01:02:29 crc kubenswrapper[4858]: I0218 01:02:29.137224 4858 scope.go:117] "RemoveContainer" containerID="41652346b6608509348e5a146d3c7364e62b3ccdacc268a0b7914778d0e36bd5" Feb 18 01:02:29 crc kubenswrapper[4858]: I0218 01:02:29.161976 4858 scope.go:117] "RemoveContainer" containerID="cf56eb1ce15076ec70ab293ad790c0773f7d8ed199c775ea3e24ceb0351914c3" Feb 18 01:02:34 crc kubenswrapper[4858]: I0218 01:02:34.050394 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kf4zd"] Feb 18 01:02:34 crc kubenswrapper[4858]: I0218 01:02:34.063571 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kf4zd"] Feb 18 01:02:35 crc kubenswrapper[4858]: I0218 01:02:35.046083 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-sllmm"] Feb 18 01:02:35 crc kubenswrapper[4858]: I0218 01:02:35.090634 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-f8p9c"] Feb 18 01:02:35 crc kubenswrapper[4858]: I0218 01:02:35.102196 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-sllmm"] Feb 18 01:02:35 crc kubenswrapper[4858]: I0218 01:02:35.113693 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-f8p9c"] Feb 18 01:02:35 crc kubenswrapper[4858]: I0218 01:02:35.439588 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf515e7-1bb4-4a22-baf6-932d935e26d5" path="/var/lib/kubelet/pods/1cf515e7-1bb4-4a22-baf6-932d935e26d5/volumes" Feb 18 01:02:35 crc kubenswrapper[4858]: I0218 01:02:35.441387 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb91d1f2-1b80-4082-a0a9-067ccadcc3a5" path="/var/lib/kubelet/pods/eb91d1f2-1b80-4082-a0a9-067ccadcc3a5/volumes" Feb 18 01:02:35 crc kubenswrapper[4858]: I0218 01:02:35.443208 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8" path="/var/lib/kubelet/pods/f3ed1a5a-7601-4e7b-94bd-b882d46ddbc8/volumes" Feb 18 01:02:36 crc kubenswrapper[4858]: I0218 01:02:36.419828 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:02:36 crc kubenswrapper[4858]: E0218 01:02:36.420185 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:02:37 crc kubenswrapper[4858]: E0218 01:02:37.574180 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:02:37 crc kubenswrapper[4858]: E0218 01:02:37.574622 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:02:37 crc kubenswrapper[4858]: E0218 01:02:37.574824 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 01:02:37 crc kubenswrapper[4858]: E0218 01:02:37.576051 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:02:39 crc kubenswrapper[4858]: E0218 01:02:39.423635 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.037526 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-dbbf-account-create-update-bvhsg"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.047911 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-z5j29"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.058937 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-pk7r2"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.070582 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-dbbf-account-create-update-bvhsg"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.082472 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-z5j29"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.093546 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0196-account-create-update-6d2sl"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.102547 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3e8e-account-create-update-sxgbt"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.111001 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-pk7r2"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.119356 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0196-account-create-update-6d2sl"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.128941 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c638-account-create-update-cw4s2"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.137149 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3e8e-account-create-update-sxgbt"] Feb 18 01:02:40 crc kubenswrapper[4858]: I0218 01:02:40.145698 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c638-account-create-update-cw4s2"] Feb 18 01:02:41 crc kubenswrapper[4858]: I0218 01:02:41.431092 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ed5ad30-acfc-4cff-8dfb-a0eb62046780" path="/var/lib/kubelet/pods/0ed5ad30-acfc-4cff-8dfb-a0eb62046780/volumes" Feb 18 01:02:41 crc kubenswrapper[4858]: I0218 01:02:41.431716 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="831ee652-d7d7-4197-946d-ce5456dcc949" 
path="/var/lib/kubelet/pods/831ee652-d7d7-4197-946d-ce5456dcc949/volumes" Feb 18 01:02:41 crc kubenswrapper[4858]: I0218 01:02:41.432242 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4a24af-9dad-4e95-a4c0-8296caee70ef" path="/var/lib/kubelet/pods/ac4a24af-9dad-4e95-a4c0-8296caee70ef/volumes" Feb 18 01:02:41 crc kubenswrapper[4858]: I0218 01:02:41.432784 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf554d2-2987-45e7-9676-2139110e2781" path="/var/lib/kubelet/pods/baf554d2-2987-45e7-9676-2139110e2781/volumes" Feb 18 01:02:41 crc kubenswrapper[4858]: I0218 01:02:41.433895 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcfae652-4782-4fce-85dd-1b25547d3189" path="/var/lib/kubelet/pods/bcfae652-4782-4fce-85dd-1b25547d3189/volumes" Feb 18 01:02:41 crc kubenswrapper[4858]: I0218 01:02:41.434626 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d579ea77-2807-419f-b4f4-558b7cc1a09b" path="/var/lib/kubelet/pods/d579ea77-2807-419f-b4f4-558b7cc1a09b/volumes" Feb 18 01:02:42 crc kubenswrapper[4858]: I0218 01:02:42.044834 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gjnp4"] Feb 18 01:02:42 crc kubenswrapper[4858]: I0218 01:02:42.056163 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gjnp4"] Feb 18 01:02:43 crc kubenswrapper[4858]: I0218 01:02:43.433843 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80e88e1-21eb-46ff-9ee5-d22d3d589ecd" path="/var/lib/kubelet/pods/e80e88e1-21eb-46ff-9ee5-d22d3d589ecd/volumes" Feb 18 01:02:44 crc kubenswrapper[4858]: I0218 01:02:44.046391 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-bb8nr"] Feb 18 01:02:44 crc kubenswrapper[4858]: I0218 01:02:44.060000 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-bb8nr"] Feb 18 01:02:45 crc kubenswrapper[4858]: I0218 01:02:45.438680 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ab729d-962a-4c7b-8e72-ddf54dd2a69e" path="/var/lib/kubelet/pods/03ab729d-962a-4c7b-8e72-ddf54dd2a69e/volumes" Feb 18 01:02:51 crc kubenswrapper[4858]: I0218 01:02:51.420127 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:02:51 crc kubenswrapper[4858]: E0218 01:02:51.421598 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:02:51 crc kubenswrapper[4858]: E0218 01:02:51.422608 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:02:52 crc kubenswrapper[4858]: E0218 01:02:52.547219 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:02:52 crc kubenswrapper[4858]: E0218 01:02:52.547593 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:02:52 crc kubenswrapper[4858]: E0218 01:02:52.547750 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:02:52 crc kubenswrapper[4858]: E0218 01:02:52.548902 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:03:02 crc kubenswrapper[4858]: I0218 01:03:02.420325 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:03:02 crc kubenswrapper[4858]: E0218 01:03:02.421076 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:03:04 crc kubenswrapper[4858]: E0218 01:03:04.420934 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:03:04 crc kubenswrapper[4858]: E0218 01:03:04.421027 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:03:14 crc kubenswrapper[4858]: I0218 01:03:14.420337 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:03:14 crc kubenswrapper[4858]: E0218 01:03:14.421377 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:03:16 crc kubenswrapper[4858]: E0218 01:03:16.422190 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:03:16 crc kubenswrapper[4858]: E0218 
01:03:16.422311 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:03:17 crc kubenswrapper[4858]: I0218 01:03:17.070555 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bd9bf"] Feb 18 01:03:17 crc kubenswrapper[4858]: I0218 01:03:17.081587 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bd9bf"] Feb 18 01:03:17 crc kubenswrapper[4858]: I0218 01:03:17.430555 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373a01de-9360-4b5b-8f80-fdfc987dddae" path="/var/lib/kubelet/pods/373a01de-9360-4b5b-8f80-fdfc987dddae/volumes" Feb 18 01:03:25 crc kubenswrapper[4858]: I0218 01:03:25.421787 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:03:25 crc kubenswrapper[4858]: E0218 01:03:25.423247 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:03:27 crc kubenswrapper[4858]: E0218 01:03:27.436307 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:03:28 crc kubenswrapper[4858]: E0218 01:03:28.422809 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.369201 4858 scope.go:117] "RemoveContainer" containerID="66e2a7143ec269b65af6a54c5d4fcc131603cd4f471e9e79348c093e1c017834" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.397651 4858 scope.go:117] "RemoveContainer" containerID="61913bcb3e85eb95cea418f82559fcb765f2055aae35df84fd167dd8dd3ab619" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.451885 4858 scope.go:117] "RemoveContainer" containerID="e8105054f4e99b0795a9cfd27d2524ebd13021b21d5843c6a1508d6d30ff6e06" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.504860 4858 scope.go:117] "RemoveContainer" containerID="a7a11cfacf44a691b91f028c671ee7ec6da8c93f112d9cc143fe6c097cdc0c28" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.571061 4858 scope.go:117] "RemoveContainer" containerID="3910f249d783b5f7cb82afdfd5bf2e171dd09be9d92a70591b56a3f8577cea07" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.602195 4858 scope.go:117] "RemoveContainer" containerID="46ee678916ae87f94add437390249c8fb6de6e8496a5570841c4409cdaa3d8cf" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.642091 
4858 scope.go:117] "RemoveContainer" containerID="21bef429d3aa782c1b7fe218abf2149149b37bf3fc6ffc07bcd004f77161f0e9" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.670001 4858 scope.go:117] "RemoveContainer" containerID="3a4127e7fda8b54bab13483b701290c9efc846b2f5e825bdfd95b8c32e5cb226" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.694156 4858 scope.go:117] "RemoveContainer" containerID="6623e91d53940bbbdcfb297b39182d5e4ab6fa33466f9b27fb4de23b85e0701b" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.720916 4858 scope.go:117] "RemoveContainer" containerID="6bab1ac8463a6b3e79b00f515110e61c38d6f50857706ff48cded01d65614990" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.742686 4858 scope.go:117] "RemoveContainer" containerID="ce0c9cc1c5da391638527f8c880d54e1375c55a46e782617bf2c63d319921a9c" Feb 18 01:03:29 crc kubenswrapper[4858]: I0218 01:03:29.780766 4858 scope.go:117] "RemoveContainer" containerID="9a9370ae17661171b0d30c1240331f41a16ba3474aa5719ca1d76ce41b6d1466" Feb 18 01:03:35 crc kubenswrapper[4858]: I0218 01:03:35.052412 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-k2wn6"] Feb 18 01:03:35 crc kubenswrapper[4858]: I0218 01:03:35.063390 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-x4wqp"] Feb 18 01:03:35 crc kubenswrapper[4858]: I0218 01:03:35.071790 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2cphn"] Feb 18 01:03:35 crc kubenswrapper[4858]: I0218 01:03:35.083571 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2cphn"] Feb 18 01:03:35 crc kubenswrapper[4858]: I0218 01:03:35.091572 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-k2wn6"] Feb 18 01:03:35 crc kubenswrapper[4858]: I0218 01:03:35.097402 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-x4wqp"] Feb 18 01:03:35 crc kubenswrapper[4858]: I0218 01:03:35.438705 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27254f13-cc74-43cf-9b54-08d87277de31" path="/var/lib/kubelet/pods/27254f13-cc74-43cf-9b54-08d87277de31/volumes" Feb 18 01:03:35 crc kubenswrapper[4858]: I0218 01:03:35.439245 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="423548cb-6c87-4876-a08c-fd64805971ea" path="/var/lib/kubelet/pods/423548cb-6c87-4876-a08c-fd64805971ea/volumes" Feb 18 01:03:35 crc kubenswrapper[4858]: I0218 01:03:35.439859 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6ceabb-aac4-48fc-9d11-abbedea94d2d" path="/var/lib/kubelet/pods/8b6ceabb-aac4-48fc-9d11-abbedea94d2d/volumes" Feb 18 01:03:37 crc kubenswrapper[4858]: I0218 01:03:37.427704 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:03:37 crc kubenswrapper[4858]: E0218 01:03:37.428182 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:03:39 crc kubenswrapper[4858]: E0218 01:03:39.425204 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:03:40 crc kubenswrapper[4858]: E0218 01:03:40.421720 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:03:50 crc kubenswrapper[4858]: I0218 01:03:50.420092 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:03:50 crc kubenswrapper[4858]: E0218 01:03:50.420967 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:03:51 crc kubenswrapper[4858]: I0218 01:03:51.069593 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wrqgb"] Feb 18 01:03:51 crc kubenswrapper[4858]: I0218 01:03:51.085876 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wrqgb"] Feb 18 01:03:51 crc kubenswrapper[4858]: I0218 01:03:51.430985 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69b36cb-f694-4e90-b673-47681459414b" path="/var/lib/kubelet/pods/f69b36cb-f694-4e90-b673-47681459414b/volumes" Feb 18 01:03:52 crc kubenswrapper[4858]: E0218 01:03:52.423490 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:03:53 crc kubenswrapper[4858]: E0218 01:03:53.421137 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:03:59 crc kubenswrapper[4858]: I0218 01:03:59.035079 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-cz4n9"] Feb 18 01:03:59 crc kubenswrapper[4858]: I0218 01:03:59.048724 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-cz4n9"] Feb 18 01:03:59 crc kubenswrapper[4858]: I0218 01:03:59.440756 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbda3331-08bc-49a1-8cf2-f24700bf4a89" path="/var/lib/kubelet/pods/cbda3331-08bc-49a1-8cf2-f24700bf4a89/volumes" Feb 18 01:04:04 crc kubenswrapper[4858]: I0218 01:04:04.419470 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:04:04 crc kubenswrapper[4858]: E0218 01:04:04.420325 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:04:06 crc kubenswrapper[4858]: E0218 01:04:06.422779 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:04:07 crc kubenswrapper[4858]: E0218 01:04:07.440791 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:04:18 crc kubenswrapper[4858]: I0218 01:04:18.420781 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:04:18 crc kubenswrapper[4858]: E0218 01:04:18.422152 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:04:18 crc kubenswrapper[4858]: E0218 01:04:18.423875 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:04:18 crc kubenswrapper[4858]: E0218 01:04:18.424319 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:04:30 crc kubenswrapper[4858]: I0218 01:04:30.070972 4858 scope.go:117] "RemoveContainer" containerID="1c85fdcbed14012ce2425b4f6a426c0e3b08b72d2006aac0bcc305570620b15d" Feb 18 01:04:30 crc kubenswrapper[4858]: I0218 01:04:30.124021 4858 scope.go:117] "RemoveContainer" containerID="91a75a5f520b435c29457529ec0ca3a2704faf832666424082ed19ae90bc5a4b" Feb 18 01:04:30 crc kubenswrapper[4858]: I0218 01:04:30.196391 4858 scope.go:117] "RemoveContainer" containerID="08dad992e4ffad74a4a43b675f9fb4b6a788c6f31bf74345751b5438743b1ba5" Feb 18 01:04:30 crc kubenswrapper[4858]: I0218 01:04:30.238933 4858 scope.go:117] "RemoveContainer" containerID="f3811c6e5165da31a9b11d83e47c6fd1e6e32765366f2a4949b7e3cba6cc0f9f" Feb 18 01:04:30 crc kubenswrapper[4858]: I0218 01:04:30.281719 4858 scope.go:117] "RemoveContainer" 
containerID="a083da6369422ac1d40b19b03f96614ec30f4c94c278c782d9005c5565f2464a" Feb 18 01:04:31 crc kubenswrapper[4858]: I0218 01:04:31.419263 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:04:31 crc kubenswrapper[4858]: E0218 01:04:31.420005 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:04:31 crc kubenswrapper[4858]: E0218 01:04:31.421033 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:04:33 crc kubenswrapper[4858]: E0218 01:04:33.422811 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:04:43 crc kubenswrapper[4858]: I0218 01:04:43.420576 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:04:43 crc kubenswrapper[4858]: E0218 01:04:43.422196 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:04:44 crc kubenswrapper[4858]: E0218 01:04:44.424386 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:04:45 crc kubenswrapper[4858]: E0218 01:04:45.426475 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.096214 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bwphg"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.104925 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-001d-account-create-update-nb7dp"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.114909 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-gbzjd"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.124436 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c0da-account-create-update-s7czg"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.131893 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a270-account-create-update-96tgv"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.139179 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bwphg"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.148570 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-001d-account-create-update-nb7dp"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.155860 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c0da-account-create-update-s7czg"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.163314 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a270-account-create-update-96tgv"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.170077 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-gbzjd"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.177162 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6hn7r"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.184760 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6hn7r"] Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.419869 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:04:56 crc kubenswrapper[4858]: I0218 01:04:56.713460 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"626db8794e6c6706ae5135270c36b181f2132b11647335d3f141e1f34af4e8c8"} Feb 18 01:04:57 crc kubenswrapper[4858]: E0218 01:04:57.448759 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:04:57 crc kubenswrapper[4858]: I0218 01:04:57.461686 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f92966-13bb-4fa6-b5d6-388baaf16288" path="/var/lib/kubelet/pods/07f92966-13bb-4fa6-b5d6-388baaf16288/volumes" Feb 18 01:04:57 crc kubenswrapper[4858]: I0218 01:04:57.462451 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12869268-4147-4557-bcaf-c027d1478c29" path="/var/lib/kubelet/pods/12869268-4147-4557-bcaf-c027d1478c29/volumes" Feb 18 01:04:57 crc kubenswrapper[4858]: I0218 01:04:57.463050 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e5d349-2a21-4825-921a-f391f079db96" path="/var/lib/kubelet/pods/25e5d349-2a21-4825-921a-f391f079db96/volumes" Feb 18 01:04:57 crc kubenswrapper[4858]: I0218 01:04:57.463699 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9641f46c-7437-4828-aa73-a35c3c49c06f" path="/var/lib/kubelet/pods/9641f46c-7437-4828-aa73-a35c3c49c06f/volumes" Feb 18 01:04:57 crc kubenswrapper[4858]: 
I0218 01:04:57.482527 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbd10d3-a140-407f-b44d-52a42e8dec44" path="/var/lib/kubelet/pods/afbd10d3-a140-407f-b44d-52a42e8dec44/volumes" Feb 18 01:04:57 crc kubenswrapper[4858]: I0218 01:04:57.483107 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f482994d-5817-4411-861c-b9634b40bf88" path="/var/lib/kubelet/pods/f482994d-5817-4411-861c-b9634b40bf88/volumes" Feb 18 01:04:58 crc kubenswrapper[4858]: E0218 01:04:58.422987 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:05:10 crc kubenswrapper[4858]: E0218 01:05:10.422554 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:05:11 crc kubenswrapper[4858]: E0218 01:05:11.422089 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:05:16 crc kubenswrapper[4858]: I0218 01:05:16.063078 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9m2qq"] Feb 18 01:05:16 crc kubenswrapper[4858]: I0218 01:05:16.080931 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9m2qq"] Feb 18 01:05:17 crc kubenswrapper[4858]: I0218 01:05:17.430904 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed63468b-fdca-49b9-b26c-8ab532261519" path="/var/lib/kubelet/pods/ed63468b-fdca-49b9-b26c-8ab532261519/volumes" Feb 18 01:05:24 crc kubenswrapper[4858]: E0218 01:05:24.421291 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:05:26 crc kubenswrapper[4858]: E0218 01:05:26.424641 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:05:30 crc kubenswrapper[4858]: I0218 01:05:30.430073 4858 scope.go:117] "RemoveContainer" containerID="ba9bd69572b54cefe3d5575f3479af10a80b9f15d10ea10d02c75f385c9c4c2e" Feb 18 01:05:30 crc kubenswrapper[4858]: I0218 01:05:30.473484 4858 scope.go:117] "RemoveContainer" containerID="b9d424d283417ed8611b52e8f476cf01a72f2dda2a1f95cc3d94a3214875d11d" Feb 18 01:05:30 crc kubenswrapper[4858]: I0218 01:05:30.559705 4858 scope.go:117] 
"RemoveContainer" containerID="f0744d27366509bcfb677df37dab469eeee5d9304b2e2ab77bb239c8569e404b" Feb 18 01:05:30 crc kubenswrapper[4858]: I0218 01:05:30.616943 4858 scope.go:117] "RemoveContainer" containerID="db67aca5adefd30832957b7dd1582244533d4b38a35de7f43d8fcc7fa48486e4" Feb 18 01:05:30 crc kubenswrapper[4858]: I0218 01:05:30.666098 4858 scope.go:117] "RemoveContainer" containerID="2f9ecffb0aa4715879c10e46d9d7cb6852814799b8c44e2643390ac4567d7430" Feb 18 01:05:30 crc kubenswrapper[4858]: I0218 01:05:30.714858 4858 scope.go:117] "RemoveContainer" containerID="d23e1c2e015054ff05db92aed4e0c3e9e1226951c591d0221622a92a9e337ffd" Feb 18 01:05:30 crc kubenswrapper[4858]: I0218 01:05:30.795080 4858 scope.go:117] "RemoveContainer" containerID="9fe3a961b055e6ca858f5152bdb53c66edf9e03a9cb23eecb3a98b5fc95d1097" Feb 18 01:05:34 crc kubenswrapper[4858]: I0218 01:05:34.039928 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-9xtk4"] Feb 18 01:05:34 crc kubenswrapper[4858]: I0218 01:05:34.059161 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-9xtk4"] Feb 18 01:05:35 crc kubenswrapper[4858]: I0218 01:05:35.433217 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="570680e8-0b24-4814-a4ea-7f70e5ed1622" path="/var/lib/kubelet/pods/570680e8-0b24-4814-a4ea-7f70e5ed1622/volumes" Feb 18 01:05:36 crc kubenswrapper[4858]: I0218 01:05:36.055870 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxsd9"] Feb 18 01:05:36 crc kubenswrapper[4858]: I0218 01:05:36.073182 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qxsd9"] Feb 18 01:05:36 crc kubenswrapper[4858]: E0218 01:05:36.427236 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:05:37 crc kubenswrapper[4858]: I0218 01:05:37.441152 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0634c49e-271a-4c92-8313-d974f58cd273" path="/var/lib/kubelet/pods/0634c49e-271a-4c92-8313-d974f58cd273/volumes" Feb 18 01:05:40 crc kubenswrapper[4858]: E0218 01:05:40.423401 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:05:47 crc kubenswrapper[4858]: E0218 01:05:47.427166 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:05:52 crc kubenswrapper[4858]: E0218 01:05:52.423088 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:05:58 crc kubenswrapper[4858]: E0218 01:05:58.421710 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:06:07 crc kubenswrapper[4858]: E0218 01:06:07.433172 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:06:12 crc kubenswrapper[4858]: E0218 01:06:12.421684 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:06:18 crc kubenswrapper[4858]: I0218 01:06:18.046982 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jbz45"] Feb 18 01:06:18 crc kubenswrapper[4858]: I0218 01:06:18.063319 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jbz45"] Feb 18 01:06:19 crc kubenswrapper[4858]: I0218 01:06:19.437716 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0cdda4b-3de4-484a-aa99-5ebde30e05d6" path="/var/lib/kubelet/pods/d0cdda4b-3de4-484a-aa99-5ebde30e05d6/volumes" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.760306 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7c54"] Feb 18 01:06:20 crc kubenswrapper[4858]: E0218 01:06:20.761018 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerName="extract-content" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.761055 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerName="extract-content" Feb 18 01:06:20 crc kubenswrapper[4858]: E0218 01:06:20.761102 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerName="extract-content" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.761114 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerName="extract-content" Feb 18 01:06:20 crc kubenswrapper[4858]: E0218 01:06:20.761129 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerName="registry-server" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.761139 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerName="registry-server" Feb 18 01:06:20 crc kubenswrapper[4858]: E0218 01:06:20.761157 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerName="registry-server" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.761168 4858 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerName="registry-server" Feb 18 01:06:20 crc kubenswrapper[4858]: E0218 01:06:20.761190 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerName="extract-utilities" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.761200 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerName="extract-utilities" Feb 18 01:06:20 crc kubenswrapper[4858]: E0218 01:06:20.761221 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerName="extract-utilities" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.761231 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerName="extract-utilities" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.761554 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9202969-1b3a-4cc4-8c44-097c54b79cf0" containerName="registry-server" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.761596 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b67b261-8965-44b3-8b00-2ea9e8313d8d" containerName="registry-server" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.763962 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.778162 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7c54"] Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.798957 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpwwk\" (UniqueName: \"kubernetes.io/projected/f5775f13-0dce-414f-9cea-ecdd4a472f56-kube-api-access-tpwwk\") pod \"redhat-operators-d7c54\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.799433 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-catalog-content\") pod \"redhat-operators-d7c54\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.799751 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-utilities\") pod \"redhat-operators-d7c54\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.902467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-utilities\") pod \"redhat-operators-d7c54\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.902653 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpwwk\" (UniqueName: \"kubernetes.io/projected/f5775f13-0dce-414f-9cea-ecdd4a472f56-kube-api-access-tpwwk\") pod 
\"redhat-operators-d7c54\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.902730 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-catalog-content\") pod \"redhat-operators-d7c54\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.903335 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-catalog-content\") pod \"redhat-operators-d7c54\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.904353 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-utilities\") pod \"redhat-operators-d7c54\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:20 crc kubenswrapper[4858]: I0218 01:06:20.933561 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpwwk\" (UniqueName: \"kubernetes.io/projected/f5775f13-0dce-414f-9cea-ecdd4a472f56-kube-api-access-tpwwk\") pod \"redhat-operators-d7c54\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:21 crc kubenswrapper[4858]: I0218 01:06:21.101519 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:21 crc kubenswrapper[4858]: I0218 01:06:21.676543 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7c54"] Feb 18 01:06:21 crc kubenswrapper[4858]: I0218 01:06:21.709704 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7c54" event={"ID":"f5775f13-0dce-414f-9cea-ecdd4a472f56","Type":"ContainerStarted","Data":"ab20309f401098ec581a20484bcdf1862922afa26f2b85d6f2f9885fefedf461"} Feb 18 01:06:22 crc kubenswrapper[4858]: E0218 01:06:22.420875 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:06:22 crc kubenswrapper[4858]: I0218 01:06:22.722056 4858 generic.go:334] "Generic (PLEG): container finished" podID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerID="e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940" exitCode=0 Feb 18 01:06:22 crc kubenswrapper[4858]: I0218 01:06:22.722104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7c54" event={"ID":"f5775f13-0dce-414f-9cea-ecdd4a472f56","Type":"ContainerDied","Data":"e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940"} Feb 18 01:06:22 crc kubenswrapper[4858]: I0218 01:06:22.724950 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:06:24 crc kubenswrapper[4858]: I0218 01:06:24.741446 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7c54" event={"ID":"f5775f13-0dce-414f-9cea-ecdd4a472f56","Type":"ContainerStarted","Data":"5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa"} Feb 18 01:06:27 crc kubenswrapper[4858]: E0218 01:06:27.431077 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:06:28 crc kubenswrapper[4858]: I0218 01:06:28.783833 4858 generic.go:334] "Generic (PLEG): container finished" podID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerID="5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa" exitCode=0 Feb 18 01:06:28 crc kubenswrapper[4858]: I0218 01:06:28.783906 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7c54" event={"ID":"f5775f13-0dce-414f-9cea-ecdd4a472f56","Type":"ContainerDied","Data":"5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa"} Feb 18 01:06:29 crc kubenswrapper[4858]: I0218 01:06:29.794529 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7c54" event={"ID":"f5775f13-0dce-414f-9cea-ecdd4a472f56","Type":"ContainerStarted","Data":"867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7"} Feb 18 01:06:29 crc kubenswrapper[4858]: I0218 01:06:29.843157 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7c54" 
podStartSLOduration=3.332997641 podStartE2EDuration="9.843132626s" podCreationTimestamp="2026-02-18 01:06:20 +0000 UTC" firstStartedPulling="2026-02-18 01:06:22.724531163 +0000 UTC m=+1936.030367915" lastFinishedPulling="2026-02-18 01:06:29.234666158 +0000 UTC m=+1942.540502900" observedRunningTime="2026-02-18 01:06:29.81980489 +0000 UTC m=+1943.125641632" watchObservedRunningTime="2026-02-18 01:06:29.843132626 +0000 UTC m=+1943.148969358" Feb 18 01:06:31 crc kubenswrapper[4858]: I0218 01:06:31.029028 4858 scope.go:117] "RemoveContainer" containerID="e6b137dfe882a81d6e61324362a90a5575dfd56132d45fe40921a53ddb6d76ce" Feb 18 01:06:31 crc kubenswrapper[4858]: I0218 01:06:31.102146 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:31 crc kubenswrapper[4858]: I0218 01:06:31.102198 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:31 crc kubenswrapper[4858]: I0218 01:06:31.113763 4858 scope.go:117] "RemoveContainer" containerID="a9eb4aeee1720de5e6366f161d92fc3dce15ed08e1ea3689bb92ce0608977bb4" Feb 18 01:06:31 crc kubenswrapper[4858]: I0218 01:06:31.152041 4858 scope.go:117] "RemoveContainer" containerID="5dd29d6ab0f5291c6b919cd6b75c1064b0972c5f019703afcf6f4f1952ee5c1a" Feb 18 01:06:32 crc kubenswrapper[4858]: I0218 01:06:32.170129 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7c54" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerName="registry-server" probeResult="failure" output=< Feb 18 01:06:32 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 01:06:32 crc kubenswrapper[4858]: > Feb 18 01:06:34 crc kubenswrapper[4858]: E0218 01:06:34.423425 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:06:39 crc kubenswrapper[4858]: E0218 01:06:39.422548 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:06:41 crc kubenswrapper[4858]: I0218 01:06:41.163370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:41 crc kubenswrapper[4858]: I0218 01:06:41.229734 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:41 crc kubenswrapper[4858]: I0218 01:06:41.407708 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7c54"] Feb 18 01:06:42 crc kubenswrapper[4858]: I0218 01:06:42.916342 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d7c54" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerName="registry-server" containerID="cri-o://867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7" gracePeriod=2 Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 
01:06:43.485780 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.641376 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpwwk\" (UniqueName: \"kubernetes.io/projected/f5775f13-0dce-414f-9cea-ecdd4a472f56-kube-api-access-tpwwk\") pod \"f5775f13-0dce-414f-9cea-ecdd4a472f56\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.641484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-catalog-content\") pod \"f5775f13-0dce-414f-9cea-ecdd4a472f56\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.641692 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-utilities\") pod \"f5775f13-0dce-414f-9cea-ecdd4a472f56\" (UID: \"f5775f13-0dce-414f-9cea-ecdd4a472f56\") " Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.642537 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-utilities" (OuterVolumeSpecName: "utilities") pod "f5775f13-0dce-414f-9cea-ecdd4a472f56" (UID: "f5775f13-0dce-414f-9cea-ecdd4a472f56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.650697 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5775f13-0dce-414f-9cea-ecdd4a472f56-kube-api-access-tpwwk" (OuterVolumeSpecName: "kube-api-access-tpwwk") pod "f5775f13-0dce-414f-9cea-ecdd4a472f56" (UID: "f5775f13-0dce-414f-9cea-ecdd4a472f56"). InnerVolumeSpecName "kube-api-access-tpwwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.744317 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpwwk\" (UniqueName: \"kubernetes.io/projected/f5775f13-0dce-414f-9cea-ecdd4a472f56-kube-api-access-tpwwk\") on node \"crc\" DevicePath \"\"" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.744363 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.755917 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5775f13-0dce-414f-9cea-ecdd4a472f56" (UID: "f5775f13-0dce-414f-9cea-ecdd4a472f56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.847752 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5775f13-0dce-414f-9cea-ecdd4a472f56-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.928046 4858 generic.go:334] "Generic (PLEG): container finished" podID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerID="867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7" exitCode=0 Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.928126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7c54" event={"ID":"f5775f13-0dce-414f-9cea-ecdd4a472f56","Type":"ContainerDied","Data":"867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7"} Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.928173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7c54" event={"ID":"f5775f13-0dce-414f-9cea-ecdd4a472f56","Type":"ContainerDied","Data":"ab20309f401098ec581a20484bcdf1862922afa26f2b85d6f2f9885fefedf461"} Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.928200 4858 scope.go:117] "RemoveContainer" containerID="867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.928446 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7c54" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.959735 4858 scope.go:117] "RemoveContainer" containerID="5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa" Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.980218 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7c54"] Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.997568 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d7c54"] Feb 18 01:06:43 crc kubenswrapper[4858]: I0218 01:06:43.999484 4858 scope.go:117] "RemoveContainer" containerID="e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940" Feb 18 01:06:44 crc kubenswrapper[4858]: I0218 01:06:44.040562 4858 scope.go:117] "RemoveContainer" containerID="867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7" Feb 18 01:06:44 crc kubenswrapper[4858]: E0218 01:06:44.041105 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7\": container with ID starting with 867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7 not found: ID does not exist" containerID="867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7" Feb 18 01:06:44 crc kubenswrapper[4858]: I0218 01:06:44.041150 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7"} err="failed to get container status \"867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7\": rpc error: code = NotFound desc = could not find container \"867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7\": container with ID starting with 867e5df3664548617546dd181f0e8cbc3746b51c7e81a67220ea6f2fc1e238a7 not found: ID does not exist" Feb 18 01:06:44 crc 
kubenswrapper[4858]: I0218 01:06:44.041179 4858 scope.go:117] "RemoveContainer" containerID="5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa" Feb 18 01:06:44 crc kubenswrapper[4858]: E0218 01:06:44.041854 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa\": container with ID starting with 5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa not found: ID does not exist" containerID="5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa" Feb 18 01:06:44 crc kubenswrapper[4858]: I0218 01:06:44.041970 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa"} err="failed to get container status \"5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa\": rpc error: code = NotFound desc = could not find container \"5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa\": container with ID starting with 5163e9244a6777bda32dd5110c390df6d13f6c9c4abdf8d8e65a76cf93f649aa not found: ID does not exist" Feb 18 01:06:44 crc kubenswrapper[4858]: I0218 01:06:44.042106 4858 scope.go:117] "RemoveContainer" containerID="e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940" Feb 18 01:06:44 crc kubenswrapper[4858]: E0218 01:06:44.042480 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940\": container with ID starting with e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940 not found: ID does not exist" containerID="e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940" Feb 18 01:06:44 crc kubenswrapper[4858]: I0218 01:06:44.042528 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940"} err="failed to get container status \"e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940\": rpc error: code = NotFound desc = could not find container \"e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940\": container with ID starting with e3427f0ce9adbba6fbf468f86c9d540ce43a69140e9d55cead32950dd78aa940 not found: ID does not exist" Feb 18 01:06:45 crc kubenswrapper[4858]: E0218 01:06:45.422405 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:06:45 crc kubenswrapper[4858]: I0218 01:06:45.439467 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" path="/var/lib/kubelet/pods/f5775f13-0dce-414f-9cea-ecdd4a472f56/volumes" Feb 18 01:06:51 crc kubenswrapper[4858]: E0218 01:06:51.421629 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:06:59 crc 
kubenswrapper[4858]: E0218 01:06:59.424113 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:07:03 crc kubenswrapper[4858]: E0218 01:07:03.424045 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:07:13 crc kubenswrapper[4858]: E0218 01:07:13.425975 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:07:17 crc kubenswrapper[4858]: E0218 01:07:17.427468 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:07:24 crc kubenswrapper[4858]: E0218 01:07:24.422053 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:07:25 crc kubenswrapper[4858]: I0218 01:07:25.265900 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:07:25 crc kubenswrapper[4858]: I0218 01:07:25.266273 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:07:30 crc kubenswrapper[4858]: E0218 01:07:30.423379 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:07:35 crc kubenswrapper[4858]: E0218 01:07:35.422715 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:07:45 crc kubenswrapper[4858]: E0218 01:07:45.542368 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:07:45 crc kubenswrapper[4858]: E0218 01:07:45.543573 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:07:45 crc kubenswrapper[4858]: E0218 01:07:45.543697 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,Resi
zePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:07:45 crc kubenswrapper[4858]: E0218 01:07:45.544900 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:07:46 crc kubenswrapper[4858]: E0218 01:07:46.421741 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:07:55 crc kubenswrapper[4858]: I0218 01:07:55.265627 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:07:55 crc kubenswrapper[4858]: I0218 01:07:55.266341 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:07:56 crc kubenswrapper[4858]: E0218 01:07:56.424279 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:07:57 crc kubenswrapper[4858]: E0218 01:07:57.549749 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:07:57 crc kubenswrapper[4858]: E0218 01:07:57.550045 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:07:57 crc kubenswrapper[4858]: E0218 01:07:57.550172 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 01:07:57 crc kubenswrapper[4858]: E0218 01:07:57.551615 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:08:08 crc kubenswrapper[4858]: E0218 01:08:08.421991 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:08:12 crc kubenswrapper[4858]: E0218 01:08:12.424621 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:08:13 crc kubenswrapper[4858]: I0218 01:08:13.940588 4858 generic.go:334] "Generic (PLEG): container finished" podID="93cfe4a9-20e3-4c13-82bb-7c3c634214ce" containerID="90c0c3ae88d5afaed1a4bd4e2de057ec71ced58a5fbe0c02a3e3b0ee455ba8fc" exitCode=2 Feb 18 01:08:13 crc kubenswrapper[4858]: I0218 01:08:13.940694 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" event={"ID":"93cfe4a9-20e3-4c13-82bb-7c3c634214ce","Type":"ContainerDied","Data":"90c0c3ae88d5afaed1a4bd4e2de057ec71ced58a5fbe0c02a3e3b0ee455ba8fc"} Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.555369 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.711118 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnbgc\" (UniqueName: \"kubernetes.io/projected/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-kube-api-access-tnbgc\") pod \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.711476 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-ssh-key-openstack-edpm-ipam\") pod \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.711644 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-inventory\") pod \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\" (UID: \"93cfe4a9-20e3-4c13-82bb-7c3c634214ce\") " Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.718615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-kube-api-access-tnbgc" (OuterVolumeSpecName: "kube-api-access-tnbgc") pod "93cfe4a9-20e3-4c13-82bb-7c3c634214ce" (UID: "93cfe4a9-20e3-4c13-82bb-7c3c634214ce"). InnerVolumeSpecName "kube-api-access-tnbgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.745704 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-inventory" (OuterVolumeSpecName: "inventory") pod "93cfe4a9-20e3-4c13-82bb-7c3c634214ce" (UID: "93cfe4a9-20e3-4c13-82bb-7c3c634214ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.766465 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "93cfe4a9-20e3-4c13-82bb-7c3c634214ce" (UID: "93cfe4a9-20e3-4c13-82bb-7c3c634214ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.814912 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.815205 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnbgc\" (UniqueName: \"kubernetes.io/projected/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-kube-api-access-tnbgc\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:15 crc kubenswrapper[4858]: I0218 01:08:15.815227 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93cfe4a9-20e3-4c13-82bb-7c3c634214ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:16 crc kubenswrapper[4858]: I0218 01:08:16.011982 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" event={"ID":"93cfe4a9-20e3-4c13-82bb-7c3c634214ce","Type":"ContainerDied","Data":"024de63752ec30435cc3a7b526290ca0a627790bd4f9c545c00aefa5ef809894"} Feb 18 01:08:16 crc kubenswrapper[4858]: I0218 01:08:16.012178 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="024de63752ec30435cc3a7b526290ca0a627790bd4f9c545c00aefa5ef809894" Feb 18 01:08:16 crc kubenswrapper[4858]: I0218 01:08:16.012244 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nz525" Feb 18 01:08:21 crc kubenswrapper[4858]: E0218 01:08:21.432453 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.027435 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp"] Feb 18 01:08:23 crc kubenswrapper[4858]: E0218 01:08:23.028529 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerName="extract-utilities" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.028603 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerName="extract-utilities" Feb 18 01:08:23 crc kubenswrapper[4858]: E0218 01:08:23.028686 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93cfe4a9-20e3-4c13-82bb-7c3c634214ce" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.028745 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="93cfe4a9-20e3-4c13-82bb-7c3c634214ce" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:08:23 crc kubenswrapper[4858]: E0218 01:08:23.028824 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerName="extract-content" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.028877 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerName="extract-content" Feb 18 01:08:23 crc 
kubenswrapper[4858]: E0218 01:08:23.028942 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerName="registry-server" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.028994 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerName="registry-server" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.029233 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="93cfe4a9-20e3-4c13-82bb-7c3c634214ce" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.029297 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5775f13-0dce-414f-9cea-ecdd4a472f56" containerName="registry-server" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.030021 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.032711 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.032885 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.037706 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.037765 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.044677 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp"] Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.081318 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.081379 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.081809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2kn\" (UniqueName: \"kubernetes.io/projected/84f1880d-a959-4d42-85c2-bf04e0268fda-kube-api-access-rd2kn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.184098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2kn\" (UniqueName: 
\"kubernetes.io/projected/84f1880d-a959-4d42-85c2-bf04e0268fda-kube-api-access-rd2kn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.184321 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.184435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.191245 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.195835 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.208151 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd2kn\" (UniqueName: \"kubernetes.io/projected/84f1880d-a959-4d42-85c2-bf04e0268fda-kube-api-access-rd2kn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.349829 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:08:23 crc kubenswrapper[4858]: I0218 01:08:23.999003 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp"] Feb 18 01:08:23 crc kubenswrapper[4858]: W0218 01:08:23.999889 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84f1880d_a959_4d42_85c2_bf04e0268fda.slice/crio-0a6161e4ab5e0f146256500847f5c1d11cf4d0bba67e3a87c12606e0504e97bd WatchSource:0}: Error finding container 0a6161e4ab5e0f146256500847f5c1d11cf4d0bba67e3a87c12606e0504e97bd: Status 404 returned error can't find the container with id 0a6161e4ab5e0f146256500847f5c1d11cf4d0bba67e3a87c12606e0504e97bd Feb 18 01:08:24 crc kubenswrapper[4858]: I0218 01:08:24.096162 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" event={"ID":"84f1880d-a959-4d42-85c2-bf04e0268fda","Type":"ContainerStarted","Data":"0a6161e4ab5e0f146256500847f5c1d11cf4d0bba67e3a87c12606e0504e97bd"} Feb 18 01:08:25 crc kubenswrapper[4858]: I0218 01:08:25.112344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" event={"ID":"84f1880d-a959-4d42-85c2-bf04e0268fda","Type":"ContainerStarted","Data":"20de882475476f5d70ad1502b446259c784acf88e74bf52d0164b1a94445bdc6"} Feb 18 01:08:25 crc kubenswrapper[4858]: I0218 01:08:25.145084 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" podStartSLOduration=1.68999277 podStartE2EDuration="2.145059959s" podCreationTimestamp="2026-02-18 01:08:23 +0000 UTC" firstStartedPulling="2026-02-18 01:08:24.002191536 +0000 UTC m=+2057.308028258" lastFinishedPulling="2026-02-18 01:08:24.457258715 +0000 UTC m=+2057.763095447" observedRunningTime="2026-02-18 01:08:25.133697933 +0000 UTC m=+2058.439534675" watchObservedRunningTime="2026-02-18 01:08:25.145059959 +0000 UTC m=+2058.450896701" Feb 18 01:08:25 crc kubenswrapper[4858]: I0218 01:08:25.265196 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:08:25 crc kubenswrapper[4858]: I0218 01:08:25.265333 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:08:25 crc kubenswrapper[4858]: I0218 01:08:25.265416 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:08:25 crc kubenswrapper[4858]: I0218 01:08:25.266726 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"626db8794e6c6706ae5135270c36b181f2132b11647335d3f141e1f34af4e8c8"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 18 01:08:25 crc kubenswrapper[4858]: I0218 01:08:25.266859 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://626db8794e6c6706ae5135270c36b181f2132b11647335d3f141e1f34af4e8c8" gracePeriod=600 Feb 18 01:08:26 crc kubenswrapper[4858]: I0218 01:08:26.123278 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="626db8794e6c6706ae5135270c36b181f2132b11647335d3f141e1f34af4e8c8" exitCode=0 Feb 18 01:08:26 crc kubenswrapper[4858]: I0218 01:08:26.123334 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"626db8794e6c6706ae5135270c36b181f2132b11647335d3f141e1f34af4e8c8"} Feb 18 01:08:26 crc kubenswrapper[4858]: I0218 01:08:26.124493 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc"} Feb 18 01:08:26 crc kubenswrapper[4858]: I0218 01:08:26.124539 4858 scope.go:117] "RemoveContainer" containerID="7c0d3ce7d62a7401a658646c29006b36ed10522d953311db80a3680645ec76ab" Feb 18 01:08:27 crc kubenswrapper[4858]: E0218 01:08:27.436427 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:08:33 crc kubenswrapper[4858]: E0218 01:08:33.422350 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.677565 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m7tnh"] Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.688123 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.696116 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m7tnh"] Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.823945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lx2s\" (UniqueName: \"kubernetes.io/projected/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-kube-api-access-7lx2s\") pod \"certified-operators-m7tnh\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.824056 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-utilities\") pod \"certified-operators-m7tnh\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.824313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-catalog-content\") pod \"certified-operators-m7tnh\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.925834 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lx2s\" (UniqueName: \"kubernetes.io/projected/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-kube-api-access-7lx2s\") pod \"certified-operators-m7tnh\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.925945 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-utilities\") pod \"certified-operators-m7tnh\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.926016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-catalog-content\") pod \"certified-operators-m7tnh\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.926596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-catalog-content\") pod \"certified-operators-m7tnh\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.926598 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-utilities\") pod \"certified-operators-m7tnh\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:36 crc kubenswrapper[4858]: I0218 01:08:36.950762 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7lx2s\" (UniqueName: \"kubernetes.io/projected/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-kube-api-access-7lx2s\") pod \"certified-operators-m7tnh\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:37 crc kubenswrapper[4858]: I0218 01:08:37.005227 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:37 crc kubenswrapper[4858]: I0218 01:08:37.516041 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m7tnh"] Feb 18 01:08:38 crc kubenswrapper[4858]: I0218 01:08:38.305058 4858 generic.go:334] "Generic (PLEG): container finished" podID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerID="b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a" exitCode=0 Feb 18 01:08:38 crc kubenswrapper[4858]: I0218 01:08:38.305334 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7tnh" event={"ID":"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8","Type":"ContainerDied","Data":"b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a"} Feb 18 01:08:38 crc kubenswrapper[4858]: I0218 01:08:38.305377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7tnh" event={"ID":"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8","Type":"ContainerStarted","Data":"fdfdfbb5b5a36b38caae2e922598670c530213f30db2936dc17ee7f45b2b7bc6"} Feb 18 01:08:38 crc kubenswrapper[4858]: E0218 01:08:38.421303 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:08:39 crc kubenswrapper[4858]: I0218 01:08:39.339855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7tnh" event={"ID":"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8","Type":"ContainerStarted","Data":"d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc"} Feb 18 01:08:41 crc kubenswrapper[4858]: I0218 01:08:41.370674 4858 generic.go:334] "Generic (PLEG): container finished" podID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerID="d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc" exitCode=0 Feb 18 01:08:41 crc kubenswrapper[4858]: I0218 01:08:41.370771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7tnh" event={"ID":"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8","Type":"ContainerDied","Data":"d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc"} Feb 18 01:08:42 crc kubenswrapper[4858]: I0218 01:08:42.382427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7tnh" event={"ID":"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8","Type":"ContainerStarted","Data":"fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1"} Feb 18 01:08:42 crc kubenswrapper[4858]: I0218 01:08:42.413673 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m7tnh" podStartSLOduration=2.9651081440000002 podStartE2EDuration="6.413648318s" podCreationTimestamp="2026-02-18 01:08:36 +0000 UTC" firstStartedPulling="2026-02-18 
01:08:38.31837463 +0000 UTC m=+2071.624211362" lastFinishedPulling="2026-02-18 01:08:41.766914784 +0000 UTC m=+2075.072751536" observedRunningTime="2026-02-18 01:08:42.411472857 +0000 UTC m=+2075.717309609" watchObservedRunningTime="2026-02-18 01:08:42.413648318 +0000 UTC m=+2075.719485090" Feb 18 01:08:44 crc kubenswrapper[4858]: E0218 01:08:44.421953 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:08:47 crc kubenswrapper[4858]: I0218 01:08:47.006273 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:47 crc kubenswrapper[4858]: I0218 01:08:47.007354 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:47 crc kubenswrapper[4858]: I0218 01:08:47.092571 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:47 crc kubenswrapper[4858]: I0218 01:08:47.488092 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:48 crc kubenswrapper[4858]: I0218 01:08:48.273316 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m7tnh"] Feb 18 01:08:49 crc kubenswrapper[4858]: E0218 01:08:49.423533 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:08:49 crc kubenswrapper[4858]: I0218 01:08:49.472249 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m7tnh" podUID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerName="registry-server" containerID="cri-o://fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1" gracePeriod=2 Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.048217 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.151330 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-utilities\") pod \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.151403 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lx2s\" (UniqueName: \"kubernetes.io/projected/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-kube-api-access-7lx2s\") pod \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.151550 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-catalog-content\") pod \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\" (UID: \"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8\") " Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.152226 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-utilities" (OuterVolumeSpecName: "utilities") pod "5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" (UID: "5300dbd0-bbd4-4d37-b17c-b1ca870adfb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.157890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-kube-api-access-7lx2s" (OuterVolumeSpecName: "kube-api-access-7lx2s") pod "5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" (UID: "5300dbd0-bbd4-4d37-b17c-b1ca870adfb8"). InnerVolumeSpecName "kube-api-access-7lx2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.204275 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" (UID: "5300dbd0-bbd4-4d37-b17c-b1ca870adfb8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.253783 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.253807 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.253817 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lx2s\" (UniqueName: \"kubernetes.io/projected/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8-kube-api-access-7lx2s\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.484213 4858 generic.go:334] "Generic (PLEG): container finished" podID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerID="fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1" exitCode=0 Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.484258 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7tnh" event={"ID":"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8","Type":"ContainerDied","Data":"fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1"} Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.485345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7tnh" event={"ID":"5300dbd0-bbd4-4d37-b17c-b1ca870adfb8","Type":"ContainerDied","Data":"fdfdfbb5b5a36b38caae2e922598670c530213f30db2936dc17ee7f45b2b7bc6"} Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.484307 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m7tnh" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.485381 4858 scope.go:117] "RemoveContainer" containerID="fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.513383 4858 scope.go:117] "RemoveContainer" containerID="d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.523548 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m7tnh"] Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.533549 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m7tnh"] Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.545169 4858 scope.go:117] "RemoveContainer" containerID="b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.607784 4858 scope.go:117] "RemoveContainer" containerID="fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1" Feb 18 01:08:50 crc kubenswrapper[4858]: E0218 01:08:50.608578 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1\": container with ID starting with fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1 not found: ID does not exist" containerID="fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.608646 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1"} err="failed to get container status \"fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1\": rpc error: code = NotFound desc = could not find container \"fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1\": container with ID starting with fccfcf14b1b49e57116452fb5811fc738676e3b170a18ef5aaabf727c87581b1 not found: ID does not exist" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.608685 4858 scope.go:117] "RemoveContainer" containerID="d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc" Feb 18 01:08:50 crc kubenswrapper[4858]: E0218 01:08:50.609616 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc\": container with ID starting with d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc not found: ID does not exist" containerID="d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.609652 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc"} err="failed to get container status \"d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc\": rpc error: code = NotFound desc = could not find container \"d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc\": container with ID starting with d6332607731d9af6f9b4f78f15e13c91aaa30a650cfcd007df964bf3394f5bbc not found: ID does not exist" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.609672 4858 scope.go:117] "RemoveContainer" 
containerID="b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a" Feb 18 01:08:50 crc kubenswrapper[4858]: E0218 01:08:50.609973 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a\": container with ID starting with b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a not found: ID does not exist" containerID="b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a" Feb 18 01:08:50 crc kubenswrapper[4858]: I0218 01:08:50.610005 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a"} err="failed to get container status \"b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a\": rpc error: code = NotFound desc = could not find container \"b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a\": container with ID starting with b26fb4ed52a5c3aa5a278fd0b248aaaa5726176ea1d367fe6704d0023e314a9a not found: ID does not exist" Feb 18 01:08:51 crc kubenswrapper[4858]: I0218 01:08:51.434349 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" path="/var/lib/kubelet/pods/5300dbd0-bbd4-4d37-b17c-b1ca870adfb8/volumes" Feb 18 01:08:55 crc kubenswrapper[4858]: E0218 01:08:55.422241 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:09:04 crc kubenswrapper[4858]: E0218 01:09:04.424296 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:09:09 crc kubenswrapper[4858]: E0218 01:09:09.422731 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:09:18 crc kubenswrapper[4858]: E0218 01:09:18.423772 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:09:22 crc kubenswrapper[4858]: E0218 01:09:22.425219 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:09:30 crc kubenswrapper[4858]: E0218 01:09:30.422562 4858 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:09:36 crc kubenswrapper[4858]: E0218 01:09:36.423570 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:09:41 crc kubenswrapper[4858]: E0218 01:09:41.421858 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:09:48 crc kubenswrapper[4858]: E0218 01:09:48.422485 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:09:56 crc kubenswrapper[4858]: E0218 01:09:56.423133 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:10:02 crc kubenswrapper[4858]: E0218 01:10:02.423623 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:10:10 crc kubenswrapper[4858]: E0218 01:10:10.422407 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:10:13 crc kubenswrapper[4858]: E0218 01:10:13.420517 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:10:23 crc kubenswrapper[4858]: E0218 01:10:23.422128 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:10:24 crc kubenswrapper[4858]: E0218 01:10:24.424919 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:10:25 crc kubenswrapper[4858]: I0218 01:10:25.266069 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:10:25 crc kubenswrapper[4858]: I0218 01:10:25.266478 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:10:35 crc kubenswrapper[4858]: E0218 01:10:35.425977 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:10:36 crc kubenswrapper[4858]: E0218 01:10:36.422410 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:10:47 crc kubenswrapper[4858]: E0218 01:10:47.437633 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:10:50 crc kubenswrapper[4858]: E0218 01:10:50.422241 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:10:55 crc kubenswrapper[4858]: I0218 01:10:55.266907 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:10:55 crc kubenswrapper[4858]: I0218 01:10:55.267519 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 01:11:02 crc kubenswrapper[4858]: E0218 01:11:02.421873 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:11:05 crc kubenswrapper[4858]: E0218 01:11:05.421077 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.230163 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n6g55"] Feb 18 01:11:13 crc kubenswrapper[4858]: E0218 01:11:13.231838 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerName="extract-content" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.232066 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerName="extract-content" Feb 18 01:11:13 crc kubenswrapper[4858]: E0218 01:11:13.232124 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerName="registry-server" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.232141 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerName="registry-server" Feb 18 01:11:13 crc kubenswrapper[4858]: E0218 01:11:13.232189 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerName="extract-utilities" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.232210 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerName="extract-utilities" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.232748 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5300dbd0-bbd4-4d37-b17c-b1ca870adfb8" containerName="registry-server" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.236800 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.245866 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6g55"] Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.411574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-utilities\") pod \"community-operators-n6g55\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.411617 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwz5n\" (UniqueName: \"kubernetes.io/projected/b6d96b6b-ee60-4ea8-b885-8425805ca628-kube-api-access-bwz5n\") pod \"community-operators-n6g55\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.411762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-catalog-content\") pod \"community-operators-n6g55\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: E0218 01:11:13.420621 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.513602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-utilities\") pod \"community-operators-n6g55\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.513647 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwz5n\" (UniqueName: \"kubernetes.io/projected/b6d96b6b-ee60-4ea8-b885-8425805ca628-kube-api-access-bwz5n\") pod \"community-operators-n6g55\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.514115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-catalog-content\") pod \"community-operators-n6g55\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.514525 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-catalog-content\") pod \"community-operators-n6g55\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.514743 
4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-utilities\") pod \"community-operators-n6g55\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.534181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwz5n\" (UniqueName: \"kubernetes.io/projected/b6d96b6b-ee60-4ea8-b885-8425805ca628-kube-api-access-bwz5n\") pod \"community-operators-n6g55\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:13 crc kubenswrapper[4858]: I0218 01:11:13.573410 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:14 crc kubenswrapper[4858]: W0218 01:11:14.133069 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6d96b6b_ee60_4ea8_b885_8425805ca628.slice/crio-9b426a8fcaa4aac28fd436e09fa4db225792021e9e5f5baeaa3ef8ed28afa404 WatchSource:0}: Error finding container 9b426a8fcaa4aac28fd436e09fa4db225792021e9e5f5baeaa3ef8ed28afa404: Status 404 returned error can't find the container with id 9b426a8fcaa4aac28fd436e09fa4db225792021e9e5f5baeaa3ef8ed28afa404 Feb 18 01:11:14 crc kubenswrapper[4858]: I0218 01:11:14.135467 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6g55"] Feb 18 01:11:15 crc kubenswrapper[4858]: I0218 01:11:15.007739 4858 generic.go:334] "Generic (PLEG): container finished" podID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerID="0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21" exitCode=0 Feb 18 01:11:15 crc kubenswrapper[4858]: I0218 01:11:15.007991 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6g55" event={"ID":"b6d96b6b-ee60-4ea8-b885-8425805ca628","Type":"ContainerDied","Data":"0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21"} Feb 18 01:11:15 crc kubenswrapper[4858]: I0218 01:11:15.008014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6g55" event={"ID":"b6d96b6b-ee60-4ea8-b885-8425805ca628","Type":"ContainerStarted","Data":"9b426a8fcaa4aac28fd436e09fa4db225792021e9e5f5baeaa3ef8ed28afa404"} Feb 18 01:11:16 crc kubenswrapper[4858]: I0218 01:11:16.023924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6g55" event={"ID":"b6d96b6b-ee60-4ea8-b885-8425805ca628","Type":"ContainerStarted","Data":"b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1"} Feb 18 01:11:16 crc kubenswrapper[4858]: E0218 01:11:16.421631 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:11:18 crc kubenswrapper[4858]: I0218 01:11:18.048924 4858 generic.go:334] "Generic (PLEG): container finished" podID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerID="b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1" exitCode=0 Feb 18 01:11:18 crc kubenswrapper[4858]: 
I0218 01:11:18.048994 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6g55" event={"ID":"b6d96b6b-ee60-4ea8-b885-8425805ca628","Type":"ContainerDied","Data":"b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1"} Feb 18 01:11:19 crc kubenswrapper[4858]: I0218 01:11:19.067544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6g55" event={"ID":"b6d96b6b-ee60-4ea8-b885-8425805ca628","Type":"ContainerStarted","Data":"91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8"} Feb 18 01:11:23 crc kubenswrapper[4858]: I0218 01:11:23.574523 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:23 crc kubenswrapper[4858]: I0218 01:11:23.575075 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:23 crc kubenswrapper[4858]: I0218 01:11:23.633168 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:23 crc kubenswrapper[4858]: I0218 01:11:23.656113 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n6g55" podStartSLOduration=7.196708542 podStartE2EDuration="10.656092792s" podCreationTimestamp="2026-02-18 01:11:13 +0000 UTC" firstStartedPulling="2026-02-18 01:11:15.00958291 +0000 UTC m=+2228.315419642" lastFinishedPulling="2026-02-18 01:11:18.46896716 +0000 UTC m=+2231.774803892" observedRunningTime="2026-02-18 01:11:19.105620151 +0000 UTC m=+2232.411456893" watchObservedRunningTime="2026-02-18 01:11:23.656092792 +0000 UTC m=+2236.961929534" Feb 18 01:11:24 crc kubenswrapper[4858]: I0218 01:11:24.192056 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:24 crc kubenswrapper[4858]: I0218 01:11:24.259877 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6g55"] Feb 18 01:11:25 crc kubenswrapper[4858]: I0218 01:11:25.264944 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:11:25 crc kubenswrapper[4858]: I0218 01:11:25.265028 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:11:25 crc kubenswrapper[4858]: I0218 01:11:25.265094 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:11:25 crc kubenswrapper[4858]: I0218 01:11:25.266040 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 18 01:11:25 crc kubenswrapper[4858]: I0218 01:11:25.266149 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" gracePeriod=600 Feb 18 01:11:25 crc kubenswrapper[4858]: E0218 01:11:25.397559 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:11:25 crc kubenswrapper[4858]: E0218 01:11:25.421668 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.144059 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" exitCode=0 Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.144136 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc"} Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.144206 4858 scope.go:117] "RemoveContainer" containerID="626db8794e6c6706ae5135270c36b181f2132b11647335d3f141e1f34af4e8c8" Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.144351 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n6g55" podUID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerName="registry-server" containerID="cri-o://91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8" gracePeriod=2 Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.145419 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:11:26 crc kubenswrapper[4858]: E0218 01:11:26.145893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:11:26 crc kubenswrapper[4858]: E0218 01:11:26.303353 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6d96b6b_ee60_4ea8_b885_8425805ca628.slice/crio-91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8.scope\": RecentStats: unable to find data in memory cache]" Feb 18 01:11:26 crc 
kubenswrapper[4858]: I0218 01:11:26.732045 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.870833 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-catalog-content\") pod \"b6d96b6b-ee60-4ea8-b885-8425805ca628\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.870989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwz5n\" (UniqueName: \"kubernetes.io/projected/b6d96b6b-ee60-4ea8-b885-8425805ca628-kube-api-access-bwz5n\") pod \"b6d96b6b-ee60-4ea8-b885-8425805ca628\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.871089 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-utilities\") pod \"b6d96b6b-ee60-4ea8-b885-8425805ca628\" (UID: \"b6d96b6b-ee60-4ea8-b885-8425805ca628\") " Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.871936 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-utilities" (OuterVolumeSpecName: "utilities") pod "b6d96b6b-ee60-4ea8-b885-8425805ca628" (UID: "b6d96b6b-ee60-4ea8-b885-8425805ca628"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.877793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d96b6b-ee60-4ea8-b885-8425805ca628-kube-api-access-bwz5n" (OuterVolumeSpecName: "kube-api-access-bwz5n") pod "b6d96b6b-ee60-4ea8-b885-8425805ca628" (UID: "b6d96b6b-ee60-4ea8-b885-8425805ca628"). InnerVolumeSpecName "kube-api-access-bwz5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.939775 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6d96b6b-ee60-4ea8-b885-8425805ca628" (UID: "b6d96b6b-ee60-4ea8-b885-8425805ca628"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.973341 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.973555 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwz5n\" (UniqueName: \"kubernetes.io/projected/b6d96b6b-ee60-4ea8-b885-8425805ca628-kube-api-access-bwz5n\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:26 crc kubenswrapper[4858]: I0218 01:11:26.973623 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d96b6b-ee60-4ea8-b885-8425805ca628-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.161604 4858 generic.go:334] "Generic (PLEG): container finished" podID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerID="91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8" exitCode=0 Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.161668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6g55" event={"ID":"b6d96b6b-ee60-4ea8-b885-8425805ca628","Type":"ContainerDied","Data":"91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8"} Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.161702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6g55" event={"ID":"b6d96b6b-ee60-4ea8-b885-8425805ca628","Type":"ContainerDied","Data":"9b426a8fcaa4aac28fd436e09fa4db225792021e9e5f5baeaa3ef8ed28afa404"} Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.161735 4858 scope.go:117] "RemoveContainer" containerID="91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.161901 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n6g55" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.224208 4858 scope.go:117] "RemoveContainer" containerID="b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.237336 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6g55"] Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.246311 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n6g55"] Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.265007 4858 scope.go:117] "RemoveContainer" containerID="0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.312585 4858 scope.go:117] "RemoveContainer" containerID="91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8" Feb 18 01:11:27 crc kubenswrapper[4858]: E0218 01:11:27.312905 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8\": container with ID starting with 91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8 not found: ID does not exist" containerID="91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.312946 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8"} err="failed to get container status \"91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8\": rpc error: code = NotFound desc = could not find container \"91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8\": container with ID starting with 91327dc69bc91e3f2db81daf11ad8e5dc3a6040cc7ad5253ff0cec068bd4ddc8 not found: ID does not exist" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.312966 4858 scope.go:117] "RemoveContainer" containerID="b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1" Feb 18 01:11:27 crc kubenswrapper[4858]: E0218 01:11:27.313316 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1\": container with ID starting with b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1 not found: ID does not exist" containerID="b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.313332 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1"} err="failed to get container status \"b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1\": rpc error: code = NotFound desc = could not find container \"b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1\": container with ID starting with b47243864a79929bc208023ea28762551c558f7302182fab30dbd57afb06eac1 not found: ID does not exist" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.313345 4858 scope.go:117] "RemoveContainer" containerID="0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21" Feb 18 01:11:27 crc kubenswrapper[4858]: E0218 01:11:27.313619 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21\": container with ID starting with 0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21 not found: ID does not exist" containerID="0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.313636 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21"} err="failed to get container status \"0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21\": rpc error: code = NotFound desc = could not find container \"0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21\": container with ID starting with 0038ab8821d7a29611e9416886f1c4f58b1eafce5dac869553943a3f2444dd21 not found: ID does not exist" Feb 18 01:11:27 crc kubenswrapper[4858]: I0218 01:11:27.433464 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d96b6b-ee60-4ea8-b885-8425805ca628" path="/var/lib/kubelet/pods/b6d96b6b-ee60-4ea8-b885-8425805ca628/volumes" Feb 18 01:11:29 crc kubenswrapper[4858]: E0218 01:11:29.422838 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:11:40 crc kubenswrapper[4858]: E0218 01:11:40.424013 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:11:41 crc kubenswrapper[4858]: I0218 01:11:41.419608 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:11:41 crc kubenswrapper[4858]: E0218 01:11:41.420220 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:11:41 crc kubenswrapper[4858]: E0218 01:11:41.422072 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:11:52 crc kubenswrapper[4858]: E0218 01:11:52.422597 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:11:55 crc kubenswrapper[4858]: I0218 
01:11:55.422694 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:11:55 crc kubenswrapper[4858]: E0218 01:11:55.432117 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:11:56 crc kubenswrapper[4858]: E0218 01:11:56.423507 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:12:03 crc kubenswrapper[4858]: E0218 01:12:03.424716 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:12:10 crc kubenswrapper[4858]: I0218 01:12:10.421211 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:12:10 crc kubenswrapper[4858]: E0218 01:12:10.422756 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:12:11 crc kubenswrapper[4858]: E0218 01:12:11.422600 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:12:14 crc kubenswrapper[4858]: E0218 01:12:14.423470 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.166271 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dg8mj"] Feb 18 01:12:18 crc kubenswrapper[4858]: E0218 01:12:18.167266 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerName="extract-utilities" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.167283 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerName="extract-utilities" Feb 18 01:12:18 crc kubenswrapper[4858]: E0218 
01:12:18.167299 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerName="extract-content" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.167307 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerName="extract-content" Feb 18 01:12:18 crc kubenswrapper[4858]: E0218 01:12:18.167320 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerName="registry-server" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.167327 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerName="registry-server" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.167567 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d96b6b-ee60-4ea8-b885-8425805ca628" containerName="registry-server" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.169148 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.186757 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg8mj"] Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.211083 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-utilities\") pod \"redhat-marketplace-dg8mj\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.211163 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tllz\" (UniqueName: \"kubernetes.io/projected/54f16324-5ef7-49a5-a328-10a754877b72-kube-api-access-7tllz\") pod \"redhat-marketplace-dg8mj\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.211233 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-catalog-content\") pod \"redhat-marketplace-dg8mj\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.313950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-utilities\") pod \"redhat-marketplace-dg8mj\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.314042 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tllz\" (UniqueName: \"kubernetes.io/projected/54f16324-5ef7-49a5-a328-10a754877b72-kube-api-access-7tllz\") pod \"redhat-marketplace-dg8mj\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.314092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-catalog-content\") pod \"redhat-marketplace-dg8mj\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.314744 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-catalog-content\") pod \"redhat-marketplace-dg8mj\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.315544 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-utilities\") pod \"redhat-marketplace-dg8mj\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.342426 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tllz\" (UniqueName: \"kubernetes.io/projected/54f16324-5ef7-49a5-a328-10a754877b72-kube-api-access-7tllz\") pod \"redhat-marketplace-dg8mj\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:18 crc kubenswrapper[4858]: I0218 01:12:18.505936 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:19 crc kubenswrapper[4858]: W0218 01:12:18.999708 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54f16324_5ef7_49a5_a328_10a754877b72.slice/crio-2c8f8a2f272041eec121449e078b77d1bf20d05c886156726c49a27bd67d384d WatchSource:0}: Error finding container 2c8f8a2f272041eec121449e078b77d1bf20d05c886156726c49a27bd67d384d: Status 404 returned error can't find the container with id 2c8f8a2f272041eec121449e078b77d1bf20d05c886156726c49a27bd67d384d Feb 18 01:12:19 crc kubenswrapper[4858]: I0218 01:12:19.001145 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg8mj"] Feb 18 01:12:19 crc kubenswrapper[4858]: I0218 01:12:19.802957 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg8mj" event={"ID":"54f16324-5ef7-49a5-a328-10a754877b72","Type":"ContainerDied","Data":"cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104"} Feb 18 01:12:19 crc kubenswrapper[4858]: I0218 01:12:19.803052 4858 generic.go:334] "Generic (PLEG): container finished" podID="54f16324-5ef7-49a5-a328-10a754877b72" containerID="cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104" exitCode=0 Feb 18 01:12:19 crc kubenswrapper[4858]: I0218 01:12:19.803334 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg8mj" event={"ID":"54f16324-5ef7-49a5-a328-10a754877b72","Type":"ContainerStarted","Data":"2c8f8a2f272041eec121449e078b77d1bf20d05c886156726c49a27bd67d384d"} Feb 18 01:12:19 crc kubenswrapper[4858]: I0218 01:12:19.806067 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:12:20 crc kubenswrapper[4858]: I0218 01:12:20.817827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg8mj" 
event={"ID":"54f16324-5ef7-49a5-a328-10a754877b72","Type":"ContainerStarted","Data":"71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68"} Feb 18 01:12:21 crc kubenswrapper[4858]: I0218 01:12:21.830101 4858 generic.go:334] "Generic (PLEG): container finished" podID="54f16324-5ef7-49a5-a328-10a754877b72" containerID="71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68" exitCode=0 Feb 18 01:12:21 crc kubenswrapper[4858]: I0218 01:12:21.830160 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg8mj" event={"ID":"54f16324-5ef7-49a5-a328-10a754877b72","Type":"ContainerDied","Data":"71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68"} Feb 18 01:12:22 crc kubenswrapper[4858]: I0218 01:12:22.845214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg8mj" event={"ID":"54f16324-5ef7-49a5-a328-10a754877b72","Type":"ContainerStarted","Data":"a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a"} Feb 18 01:12:22 crc kubenswrapper[4858]: I0218 01:12:22.873673 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dg8mj" podStartSLOduration=2.381382846 podStartE2EDuration="4.873649395s" podCreationTimestamp="2026-02-18 01:12:18 +0000 UTC" firstStartedPulling="2026-02-18 01:12:19.805699291 +0000 UTC m=+2293.111536063" lastFinishedPulling="2026-02-18 01:12:22.29796585 +0000 UTC m=+2295.603802612" observedRunningTime="2026-02-18 01:12:22.863410707 +0000 UTC m=+2296.169247459" watchObservedRunningTime="2026-02-18 01:12:22.873649395 +0000 UTC m=+2296.179486147" Feb 18 01:12:24 crc kubenswrapper[4858]: E0218 01:12:24.421671 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:12:25 crc kubenswrapper[4858]: I0218 01:12:25.420602 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:12:25 crc kubenswrapper[4858]: E0218 01:12:25.421088 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:12:28 crc kubenswrapper[4858]: I0218 01:12:28.506848 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:28 crc kubenswrapper[4858]: I0218 01:12:28.507561 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:28 crc kubenswrapper[4858]: I0218 01:12:28.571220 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:28 crc kubenswrapper[4858]: I0218 01:12:28.960764 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:29 crc 
kubenswrapper[4858]: I0218 01:12:29.042467 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg8mj"] Feb 18 01:12:29 crc kubenswrapper[4858]: E0218 01:12:29.422386 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:12:30 crc kubenswrapper[4858]: I0218 01:12:30.927661 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dg8mj" podUID="54f16324-5ef7-49a5-a328-10a754877b72" containerName="registry-server" containerID="cri-o://a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a" gracePeriod=2 Feb 18 01:12:31 crc kubenswrapper[4858]: E0218 01:12:31.177047 4858 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.409412 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.531175 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-utilities\") pod \"54f16324-5ef7-49a5-a328-10a754877b72\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.531284 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-catalog-content\") pod \"54f16324-5ef7-49a5-a328-10a754877b72\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.531640 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tllz\" (UniqueName: \"kubernetes.io/projected/54f16324-5ef7-49a5-a328-10a754877b72-kube-api-access-7tllz\") pod \"54f16324-5ef7-49a5-a328-10a754877b72\" (UID: \"54f16324-5ef7-49a5-a328-10a754877b72\") " Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.532078 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-utilities" (OuterVolumeSpecName: "utilities") pod "54f16324-5ef7-49a5-a328-10a754877b72" (UID: "54f16324-5ef7-49a5-a328-10a754877b72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.534143 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.538657 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f16324-5ef7-49a5-a328-10a754877b72-kube-api-access-7tllz" (OuterVolumeSpecName: "kube-api-access-7tllz") pod "54f16324-5ef7-49a5-a328-10a754877b72" (UID: "54f16324-5ef7-49a5-a328-10a754877b72"). InnerVolumeSpecName "kube-api-access-7tllz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.561711 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54f16324-5ef7-49a5-a328-10a754877b72" (UID: "54f16324-5ef7-49a5-a328-10a754877b72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.636150 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54f16324-5ef7-49a5-a328-10a754877b72-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.636487 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tllz\" (UniqueName: \"kubernetes.io/projected/54f16324-5ef7-49a5-a328-10a754877b72-kube-api-access-7tllz\") on node \"crc\" DevicePath \"\"" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.941769 4858 generic.go:334] "Generic (PLEG): container finished" podID="54f16324-5ef7-49a5-a328-10a754877b72" containerID="a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a" exitCode=0 Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.941828 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg8mj" event={"ID":"54f16324-5ef7-49a5-a328-10a754877b72","Type":"ContainerDied","Data":"a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a"} Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.941864 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg8mj" event={"ID":"54f16324-5ef7-49a5-a328-10a754877b72","Type":"ContainerDied","Data":"2c8f8a2f272041eec121449e078b77d1bf20d05c886156726c49a27bd67d384d"} Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.941884 4858 scope.go:117] "RemoveContainer" containerID="a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.941904 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg8mj" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.984604 4858 scope.go:117] "RemoveContainer" containerID="71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68" Feb 18 01:12:31 crc kubenswrapper[4858]: I0218 01:12:31.988489 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg8mj"] Feb 18 01:12:32 crc kubenswrapper[4858]: I0218 01:12:32.001416 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg8mj"] Feb 18 01:12:32 crc kubenswrapper[4858]: I0218 01:12:32.013975 4858 scope.go:117] "RemoveContainer" containerID="cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104" Feb 18 01:12:32 crc kubenswrapper[4858]: I0218 01:12:32.073763 4858 scope.go:117] "RemoveContainer" containerID="a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a" Feb 18 01:12:32 crc kubenswrapper[4858]: E0218 01:12:32.074226 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a\": container with ID starting with a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a not found: ID does not exist" containerID="a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a" Feb 18 01:12:32 crc kubenswrapper[4858]: I0218 01:12:32.074263 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a"} err="failed to get container status \"a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a\": rpc error: code = NotFound desc = could not find container \"a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a\": container with ID starting with a5f34767989a105a12df397cc3756ac3731501103270ff5e5dc6ea4b4a48127a not found: ID does not exist" Feb 18 01:12:32 crc kubenswrapper[4858]: I0218 01:12:32.074283 4858 scope.go:117] "RemoveContainer" containerID="71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68" Feb 18 01:12:32 crc kubenswrapper[4858]: E0218 01:12:32.074536 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68\": container with ID starting with 71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68 not found: ID does not exist" containerID="71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68" Feb 18 01:12:32 crc kubenswrapper[4858]: I0218 01:12:32.074559 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68"} err="failed to get container status \"71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68\": rpc error: code = NotFound desc = could not find container \"71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68\": container with ID starting with 71fb512cc2cb7a5ff98a5195d864a360a41a40ffb45b81caa563057b4f347b68 not found: ID does not exist" Feb 18 01:12:32 crc kubenswrapper[4858]: I0218 01:12:32.074573 4858 scope.go:117] "RemoveContainer" containerID="cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104" Feb 18 01:12:32 crc kubenswrapper[4858]: E0218 01:12:32.074792 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104\": container with ID starting with cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104 not found: ID does not exist" containerID="cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104" Feb 18 01:12:32 crc kubenswrapper[4858]: I0218 01:12:32.074819 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104"} err="failed to get container status \"cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104\": rpc error: code = NotFound desc = could not find container \"cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104\": container with ID starting with cacdc5e14b3f889f8cd8064c621c2ccd22f6b01b6a11ffc74fd7e9a696613104 not found: ID does not exist" Feb 18 01:12:33 crc kubenswrapper[4858]: I0218 01:12:33.449099 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f16324-5ef7-49a5-a328-10a754877b72" path="/var/lib/kubelet/pods/54f16324-5ef7-49a5-a328-10a754877b72/volumes" Feb 18 01:12:37 crc kubenswrapper[4858]: I0218 01:12:37.426357 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:12:37 crc kubenswrapper[4858]: E0218 01:12:37.427238 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:12:37 crc kubenswrapper[4858]: E0218 01:12:37.428218 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:12:43 crc kubenswrapper[4858]: E0218 01:12:43.422270 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:12:49 crc kubenswrapper[4858]: E0218 01:12:49.550550 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:12:49 crc kubenswrapper[4858]: E0218 01:12:49.551187 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:12:49 crc kubenswrapper[4858]: E0218 01:12:49.551348 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 01:12:49 crc kubenswrapper[4858]: E0218 01:12:49.552551 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:12:51 crc kubenswrapper[4858]: I0218 01:12:51.419618 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:12:51 crc kubenswrapper[4858]: E0218 01:12:51.420410 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:12:56 crc kubenswrapper[4858]: E0218 01:12:56.422781 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:13:02 crc kubenswrapper[4858]: E0218 01:13:02.423162 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:13:04 crc kubenswrapper[4858]: I0218 01:13:04.420465 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:13:04 crc kubenswrapper[4858]: E0218 01:13:04.421420 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:13:10 crc kubenswrapper[4858]: E0218 01:13:10.553365 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:13:10 crc kubenswrapper[4858]: E0218 01:13:10.554075 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:13:10 crc kubenswrapper[4858]: E0218 01:13:10.554288 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 01:13:10 crc kubenswrapper[4858]: E0218 01:13:10.555567 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:13:15 crc kubenswrapper[4858]: E0218 01:13:15.422910 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:13:18 crc kubenswrapper[4858]: I0218 01:13:18.419925 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:13:18 crc kubenswrapper[4858]: E0218 01:13:18.420801 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:13:23 crc kubenswrapper[4858]: E0218 01:13:23.424471 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:13:29 crc kubenswrapper[4858]: E0218 01:13:29.421884 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:13:30 crc kubenswrapper[4858]: I0218 01:13:30.419524 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:13:30 crc kubenswrapper[4858]: E0218 01:13:30.419964 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:13:36 crc kubenswrapper[4858]: E0218 01:13:36.421982 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:13:42 crc kubenswrapper[4858]: E0218 01:13:42.423118 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:13:44 crc kubenswrapper[4858]: I0218 01:13:44.420912 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:13:44 crc kubenswrapper[4858]: E0218 01:13:44.421453 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:13:47 crc kubenswrapper[4858]: E0218 01:13:47.426915 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:13:55 crc kubenswrapper[4858]: E0218 01:13:55.423998 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:13:58 crc kubenswrapper[4858]: I0218 01:13:58.420932 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:13:58 crc kubenswrapper[4858]: E0218 01:13:58.421905 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:14:00 crc kubenswrapper[4858]: E0218 01:14:00.423838 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:14:10 crc kubenswrapper[4858]: I0218 01:14:10.420091 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:14:10 crc kubenswrapper[4858]: E0218 01:14:10.420822 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:14:10 crc kubenswrapper[4858]: E0218 01:14:10.422107 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:14:11 crc kubenswrapper[4858]: E0218 01:14:11.422358 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:14:21 crc kubenswrapper[4858]: I0218 01:14:21.419375 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:14:21 crc kubenswrapper[4858]: E0218 01:14:21.420468 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:14:22 crc kubenswrapper[4858]: E0218 01:14:22.422809 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:14:25 crc kubenswrapper[4858]: E0218 01:14:25.424037 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:14:34 crc kubenswrapper[4858]: E0218 01:14:34.422867 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:14:36 crc kubenswrapper[4858]: I0218 01:14:36.420042 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:14:36 crc kubenswrapper[4858]: E0218 01:14:36.420885 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:14:38 crc kubenswrapper[4858]: E0218 01:14:38.422939 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:14:44 crc kubenswrapper[4858]: I0218 01:14:44.982484 4858 generic.go:334] "Generic (PLEG): container finished" podID="84f1880d-a959-4d42-85c2-bf04e0268fda" containerID="20de882475476f5d70ad1502b446259c784acf88e74bf52d0164b1a94445bdc6" exitCode=2 Feb 18 01:14:44 crc kubenswrapper[4858]: I0218 01:14:44.982531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" event={"ID":"84f1880d-a959-4d42-85c2-bf04e0268fda","Type":"ContainerDied","Data":"20de882475476f5d70ad1502b446259c784acf88e74bf52d0164b1a94445bdc6"} Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.466575 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.629577 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-ssh-key-openstack-edpm-ipam\") pod \"84f1880d-a959-4d42-85c2-bf04e0268fda\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.629981 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-inventory\") pod \"84f1880d-a959-4d42-85c2-bf04e0268fda\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.630012 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd2kn\" (UniqueName: \"kubernetes.io/projected/84f1880d-a959-4d42-85c2-bf04e0268fda-kube-api-access-rd2kn\") pod \"84f1880d-a959-4d42-85c2-bf04e0268fda\" (UID: \"84f1880d-a959-4d42-85c2-bf04e0268fda\") " Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.634857 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f1880d-a959-4d42-85c2-bf04e0268fda-kube-api-access-rd2kn" (OuterVolumeSpecName: "kube-api-access-rd2kn") pod "84f1880d-a959-4d42-85c2-bf04e0268fda" (UID: "84f1880d-a959-4d42-85c2-bf04e0268fda"). InnerVolumeSpecName "kube-api-access-rd2kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.661983 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "84f1880d-a959-4d42-85c2-bf04e0268fda" (UID: "84f1880d-a959-4d42-85c2-bf04e0268fda"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.663957 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-inventory" (OuterVolumeSpecName: "inventory") pod "84f1880d-a959-4d42-85c2-bf04e0268fda" (UID: "84f1880d-a959-4d42-85c2-bf04e0268fda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.733010 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.733049 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd2kn\" (UniqueName: \"kubernetes.io/projected/84f1880d-a959-4d42-85c2-bf04e0268fda-kube-api-access-rd2kn\") on node \"crc\" DevicePath \"\"" Feb 18 01:14:46 crc kubenswrapper[4858]: I0218 01:14:46.733064 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f1880d-a959-4d42-85c2-bf04e0268fda-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:14:47 crc kubenswrapper[4858]: I0218 01:14:47.006061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" event={"ID":"84f1880d-a959-4d42-85c2-bf04e0268fda","Type":"ContainerDied","Data":"0a6161e4ab5e0f146256500847f5c1d11cf4d0bba67e3a87c12606e0504e97bd"} Feb 18 01:14:47 crc kubenswrapper[4858]: I0218 01:14:47.006102 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a6161e4ab5e0f146256500847f5c1d11cf4d0bba67e3a87c12606e0504e97bd" Feb 18 01:14:47 crc kubenswrapper[4858]: I0218 01:14:47.006145 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp" Feb 18 01:14:49 crc kubenswrapper[4858]: I0218 01:14:49.420429 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:14:49 crc kubenswrapper[4858]: E0218 01:14:49.421130 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:14:49 crc kubenswrapper[4858]: E0218 01:14:49.423289 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:14:53 crc kubenswrapper[4858]: E0218 01:14:53.422030 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.164031 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m"] Feb 18 01:15:00 crc kubenswrapper[4858]: E0218 01:15:00.165284 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f16324-5ef7-49a5-a328-10a754877b72" containerName="extract-utilities" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.165311 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f16324-5ef7-49a5-a328-10a754877b72" containerName="extract-utilities" Feb 18 01:15:00 crc kubenswrapper[4858]: E0218 01:15:00.165338 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f16324-5ef7-49a5-a328-10a754877b72" containerName="extract-content" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.165349 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f16324-5ef7-49a5-a328-10a754877b72" containerName="extract-content" Feb 18 01:15:00 crc kubenswrapper[4858]: E0218 01:15:00.165365 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f1880d-a959-4d42-85c2-bf04e0268fda" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.165393 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f1880d-a959-4d42-85c2-bf04e0268fda" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:15:00 crc kubenswrapper[4858]: E0218 01:15:00.165447 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f16324-5ef7-49a5-a328-10a754877b72" containerName="registry-server" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.165460 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f16324-5ef7-49a5-a328-10a754877b72" containerName="registry-server" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.165800 4858 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="84f1880d-a959-4d42-85c2-bf04e0268fda" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.165882 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f16324-5ef7-49a5-a328-10a754877b72" containerName="registry-server" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.166999 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.170283 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.170525 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.181830 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m"] Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.318379 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-secret-volume\") pod \"collect-profiles-29522955-mxg7m\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.318465 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-config-volume\") pod \"collect-profiles-29522955-mxg7m\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.318525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86xn9\" (UniqueName: \"kubernetes.io/projected/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-kube-api-access-86xn9\") pod \"collect-profiles-29522955-mxg7m\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.420766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-secret-volume\") pod \"collect-profiles-29522955-mxg7m\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.421077 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-config-volume\") pod \"collect-profiles-29522955-mxg7m\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.421117 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86xn9\" (UniqueName: 
\"kubernetes.io/projected/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-kube-api-access-86xn9\") pod \"collect-profiles-29522955-mxg7m\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.423212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-config-volume\") pod \"collect-profiles-29522955-mxg7m\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.434068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-secret-volume\") pod \"collect-profiles-29522955-mxg7m\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.439163 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86xn9\" (UniqueName: \"kubernetes.io/projected/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-kube-api-access-86xn9\") pod \"collect-profiles-29522955-mxg7m\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:00 crc kubenswrapper[4858]: I0218 01:15:00.510562 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:01 crc kubenswrapper[4858]: I0218 01:15:01.039094 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m"] Feb 18 01:15:01 crc kubenswrapper[4858]: I0218 01:15:01.163032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" event={"ID":"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc","Type":"ContainerStarted","Data":"db0e6626c2af8fb9ea69b2debd88ad0329603e05b1be93c383cab96f5aacdb7d"} Feb 18 01:15:02 crc kubenswrapper[4858]: I0218 01:15:02.176742 4858 generic.go:334] "Generic (PLEG): container finished" podID="2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc" containerID="25b3c2ae65b6b9459c29a744c97bd8150f4b2e6807ef8b2fba493f1c1e322e6f" exitCode=0 Feb 18 01:15:02 crc kubenswrapper[4858]: I0218 01:15:02.177195 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" event={"ID":"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc","Type":"ContainerDied","Data":"25b3c2ae65b6b9459c29a744c97bd8150f4b2e6807ef8b2fba493f1c1e322e6f"} Feb 18 01:15:03 crc kubenswrapper[4858]: I0218 01:15:03.675164 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:03 crc kubenswrapper[4858]: I0218 01:15:03.799025 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86xn9\" (UniqueName: \"kubernetes.io/projected/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-kube-api-access-86xn9\") pod \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " Feb 18 01:15:03 crc kubenswrapper[4858]: I0218 01:15:03.799205 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-secret-volume\") pod \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " Feb 18 01:15:03 crc kubenswrapper[4858]: I0218 01:15:03.799330 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-config-volume\") pod \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\" (UID: \"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc\") " Feb 18 01:15:03 crc kubenswrapper[4858]: I0218 01:15:03.800452 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-config-volume" (OuterVolumeSpecName: "config-volume") pod "2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc" (UID: "2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 01:15:03 crc kubenswrapper[4858]: I0218 01:15:03.806380 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-kube-api-access-86xn9" (OuterVolumeSpecName: "kube-api-access-86xn9") pod "2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc" (UID: "2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc"). InnerVolumeSpecName "kube-api-access-86xn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:15:03 crc kubenswrapper[4858]: I0218 01:15:03.808837 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc" (UID: "2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.032778 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj"] Feb 18 01:15:04 crc kubenswrapper[4858]: E0218 01:15:04.033176 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc" containerName="collect-profiles" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.033194 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc" containerName="collect-profiles" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.033368 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc" containerName="collect-profiles" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.034076 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.038310 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.038445 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.038629 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.039053 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.049512 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj"] Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.125001 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.125040 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86xn9\" (UniqueName: \"kubernetes.io/projected/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-kube-api-access-86xn9\") on node \"crc\" DevicePath \"\"" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.125052 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.199297 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" event={"ID":"2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc","Type":"ContainerDied","Data":"db0e6626c2af8fb9ea69b2debd88ad0329603e05b1be93c383cab96f5aacdb7d"} Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.199340 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db0e6626c2af8fb9ea69b2debd88ad0329603e05b1be93c383cab96f5aacdb7d" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.199400 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.227300 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.227351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r25wf\" (UniqueName: \"kubernetes.io/projected/13898d25-206e-4010-9f2f-54546c48aee6-kube-api-access-r25wf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.227441 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.330309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.330391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r25wf\" (UniqueName: \"kubernetes.io/projected/13898d25-206e-4010-9f2f-54546c48aee6-kube-api-access-r25wf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.330595 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.336459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.338550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.358670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r25wf\" (UniqueName: \"kubernetes.io/projected/13898d25-206e-4010-9f2f-54546c48aee6-kube-api-access-r25wf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.416946 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.418885 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:15:04 crc kubenswrapper[4858]: E0218 01:15:04.419379 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:15:04 crc kubenswrapper[4858]: E0218 01:15:04.420587 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.793261 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw"] Feb 18 01:15:04 crc kubenswrapper[4858]: I0218 01:15:04.806026 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-dtdsw"] Feb 18 01:15:05 crc kubenswrapper[4858]: W0218 01:15:05.025722 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13898d25_206e_4010_9f2f_54546c48aee6.slice/crio-03094c0c7b972abeb330ec70fc4f07936c75ccd7dfdda8b8d52e8197de6684bf WatchSource:0}: Error finding container 03094c0c7b972abeb330ec70fc4f07936c75ccd7dfdda8b8d52e8197de6684bf: Status 404 returned error can't find the container with id 03094c0c7b972abeb330ec70fc4f07936c75ccd7dfdda8b8d52e8197de6684bf Feb 18 01:15:05 crc kubenswrapper[4858]: I0218 01:15:05.036914 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj"] Feb 18 01:15:05 crc kubenswrapper[4858]: I0218 01:15:05.210353 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" event={"ID":"13898d25-206e-4010-9f2f-54546c48aee6","Type":"ContainerStarted","Data":"03094c0c7b972abeb330ec70fc4f07936c75ccd7dfdda8b8d52e8197de6684bf"} Feb 18 01:15:05 crc kubenswrapper[4858]: I0218 01:15:05.441292 4858 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5bd9f27-973a-4ec3-91b8-87c2c20c6c34" path="/var/lib/kubelet/pods/a5bd9f27-973a-4ec3-91b8-87c2c20c6c34/volumes" Feb 18 01:15:06 crc kubenswrapper[4858]: I0218 01:15:06.219350 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" event={"ID":"13898d25-206e-4010-9f2f-54546c48aee6","Type":"ContainerStarted","Data":"452ba6df3f0a1e6cfe64aaa272d9cf694fe7f906a4b06111915e6ffe2826939e"} Feb 18 01:15:06 crc kubenswrapper[4858]: I0218 01:15:06.256427 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" podStartSLOduration=1.7397341 podStartE2EDuration="2.256406662s" podCreationTimestamp="2026-02-18 01:15:04 +0000 UTC" firstStartedPulling="2026-02-18 01:15:05.028064998 +0000 UTC m=+2458.333901730" lastFinishedPulling="2026-02-18 01:15:05.54473756 +0000 UTC m=+2458.850574292" observedRunningTime="2026-02-18 01:15:06.252158851 +0000 UTC m=+2459.557995583" watchObservedRunningTime="2026-02-18 01:15:06.256406662 +0000 UTC m=+2459.562243394" Feb 18 01:15:07 crc kubenswrapper[4858]: E0218 01:15:07.431761 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:15:15 crc kubenswrapper[4858]: E0218 01:15:15.421726 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:15:17 crc kubenswrapper[4858]: I0218 01:15:17.426367 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:15:17 crc kubenswrapper[4858]: E0218 01:15:17.427543 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:15:18 crc kubenswrapper[4858]: E0218 01:15:18.422230 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:15:28 crc kubenswrapper[4858]: I0218 01:15:28.419179 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:15:28 crc kubenswrapper[4858]: E0218 01:15:28.420003 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:15:29 crc kubenswrapper[4858]: E0218 01:15:29.425102 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:15:30 crc kubenswrapper[4858]: E0218 01:15:30.421303 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:15:31 crc kubenswrapper[4858]: I0218 01:15:31.485424 4858 scope.go:117] "RemoveContainer" containerID="0108ee457bde4c4b8d70b9b80e5bfd9393784b012bca0065d0eb4c799f9404e3" Feb 18 01:15:41 crc kubenswrapper[4858]: I0218 01:15:41.420153 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:15:41 crc kubenswrapper[4858]: E0218 01:15:41.421296 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:15:43 crc kubenswrapper[4858]: E0218 01:15:43.423353 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:15:44 crc kubenswrapper[4858]: E0218 01:15:44.422111 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:15:54 crc kubenswrapper[4858]: I0218 01:15:54.420164 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:15:54 crc kubenswrapper[4858]: E0218 01:15:54.421181 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:15:58 crc kubenswrapper[4858]: E0218 01:15:58.422461 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:15:58 crc kubenswrapper[4858]: E0218 01:15:58.423037 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:16:07 crc kubenswrapper[4858]: I0218 01:16:07.429750 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:16:07 crc kubenswrapper[4858]: E0218 01:16:07.430672 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:16:10 crc kubenswrapper[4858]: E0218 01:16:10.429551 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:16:12 crc kubenswrapper[4858]: E0218 01:16:12.423301 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:16:21 crc kubenswrapper[4858]: I0218 01:16:21.420620 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:16:21 crc kubenswrapper[4858]: E0218 01:16:21.422021 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:16:24 crc kubenswrapper[4858]: E0218 01:16:24.422773 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:16:24 crc kubenswrapper[4858]: E0218 01:16:24.422820 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:16:35 crc kubenswrapper[4858]: I0218 01:16:35.419948 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:16:36 crc kubenswrapper[4858]: I0218 01:16:36.224915 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"313584f84c2f9b206f79e2d3fe4418ef5be7b70b242870da3851fa358bfd4f44"} Feb 18 01:16:38 crc kubenswrapper[4858]: E0218 01:16:38.427675 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:16:39 crc kubenswrapper[4858]: E0218 01:16:39.422371 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:16:51 crc kubenswrapper[4858]: E0218 01:16:51.422716 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:16:52 crc kubenswrapper[4858]: E0218 01:16:52.423818 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.035229 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4mhr"] Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.039916 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.052906 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4mhr"] Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.136597 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25b4m\" (UniqueName: \"kubernetes.io/projected/d3478f2f-8f82-4111-8d1c-eafa25c76f56-kube-api-access-25b4m\") pod \"redhat-operators-l4mhr\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.136656 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-utilities\") pod \"redhat-operators-l4mhr\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.136839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-catalog-content\") pod \"redhat-operators-l4mhr\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.239113 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-utilities\") pod \"redhat-operators-l4mhr\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.239165 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25b4m\" (UniqueName: \"kubernetes.io/projected/d3478f2f-8f82-4111-8d1c-eafa25c76f56-kube-api-access-25b4m\") pod \"redhat-operators-l4mhr\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.239304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-catalog-content\") pod \"redhat-operators-l4mhr\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.239875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-utilities\") pod \"redhat-operators-l4mhr\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.239896 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-catalog-content\") pod \"redhat-operators-l4mhr\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.264091 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-25b4m\" (UniqueName: \"kubernetes.io/projected/d3478f2f-8f82-4111-8d1c-eafa25c76f56-kube-api-access-25b4m\") pod \"redhat-operators-l4mhr\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.386315 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:16:55 crc kubenswrapper[4858]: I0218 01:16:55.904078 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4mhr"] Feb 18 01:16:56 crc kubenswrapper[4858]: I0218 01:16:56.536705 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerID="5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f" exitCode=0 Feb 18 01:16:56 crc kubenswrapper[4858]: I0218 01:16:56.536806 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4mhr" event={"ID":"d3478f2f-8f82-4111-8d1c-eafa25c76f56","Type":"ContainerDied","Data":"5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f"} Feb 18 01:16:56 crc kubenswrapper[4858]: I0218 01:16:56.536987 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4mhr" event={"ID":"d3478f2f-8f82-4111-8d1c-eafa25c76f56","Type":"ContainerStarted","Data":"7083d0172cfa0735962a85f88fd61038bf5b40859816b56d59d2aefd054f2cbc"} Feb 18 01:16:58 crc kubenswrapper[4858]: I0218 01:16:58.566897 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4mhr" event={"ID":"d3478f2f-8f82-4111-8d1c-eafa25c76f56","Type":"ContainerStarted","Data":"8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc"} Feb 18 01:17:02 crc kubenswrapper[4858]: I0218 01:17:02.620063 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerID="8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc" exitCode=0 Feb 18 01:17:02 crc kubenswrapper[4858]: I0218 01:17:02.620172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4mhr" event={"ID":"d3478f2f-8f82-4111-8d1c-eafa25c76f56","Type":"ContainerDied","Data":"8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc"} Feb 18 01:17:03 crc kubenswrapper[4858]: I0218 01:17:03.635593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4mhr" event={"ID":"d3478f2f-8f82-4111-8d1c-eafa25c76f56","Type":"ContainerStarted","Data":"e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593"} Feb 18 01:17:03 crc kubenswrapper[4858]: I0218 01:17:03.663915 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4mhr" podStartSLOduration=3.161233727 podStartE2EDuration="9.663880711s" podCreationTimestamp="2026-02-18 01:16:54 +0000 UTC" firstStartedPulling="2026-02-18 01:16:56.539015595 +0000 UTC m=+2569.844852317" lastFinishedPulling="2026-02-18 01:17:03.041662559 +0000 UTC m=+2576.347499301" observedRunningTime="2026-02-18 01:17:03.659468847 +0000 UTC m=+2576.965305599" watchObservedRunningTime="2026-02-18 01:17:03.663880711 +0000 UTC m=+2576.969717483" Feb 18 01:17:04 crc kubenswrapper[4858]: E0218 01:17:04.421283 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:17:05 crc kubenswrapper[4858]: I0218 01:17:05.387480 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:17:05 crc kubenswrapper[4858]: I0218 01:17:05.387818 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:17:05 crc kubenswrapper[4858]: E0218 01:17:05.440754 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:17:06 crc kubenswrapper[4858]: I0218 01:17:06.451753 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l4mhr" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerName="registry-server" probeResult="failure" output=< Feb 18 01:17:06 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 01:17:06 crc kubenswrapper[4858]: > Feb 18 01:17:15 crc kubenswrapper[4858]: I0218 01:17:15.449556 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:17:15 crc kubenswrapper[4858]: I0218 01:17:15.514977 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:17:15 crc kubenswrapper[4858]: I0218 01:17:15.694271 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4mhr"] Feb 18 01:17:16 crc kubenswrapper[4858]: E0218 01:17:16.422791 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:17:16 crc kubenswrapper[4858]: I0218 01:17:16.761985 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l4mhr" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerName="registry-server" containerID="cri-o://e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593" gracePeriod=2 Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.370911 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.452147 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25b4m\" (UniqueName: \"kubernetes.io/projected/d3478f2f-8f82-4111-8d1c-eafa25c76f56-kube-api-access-25b4m\") pod \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.452197 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-utilities\") pod \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.452482 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-catalog-content\") pod \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\" (UID: \"d3478f2f-8f82-4111-8d1c-eafa25c76f56\") " Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.453218 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-utilities" (OuterVolumeSpecName: "utilities") pod "d3478f2f-8f82-4111-8d1c-eafa25c76f56" (UID: "d3478f2f-8f82-4111-8d1c-eafa25c76f56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.460666 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3478f2f-8f82-4111-8d1c-eafa25c76f56-kube-api-access-25b4m" (OuterVolumeSpecName: "kube-api-access-25b4m") pod "d3478f2f-8f82-4111-8d1c-eafa25c76f56" (UID: "d3478f2f-8f82-4111-8d1c-eafa25c76f56"). InnerVolumeSpecName "kube-api-access-25b4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.555513 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25b4m\" (UniqueName: \"kubernetes.io/projected/d3478f2f-8f82-4111-8d1c-eafa25c76f56-kube-api-access-25b4m\") on node \"crc\" DevicePath \"\"" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.555550 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.602891 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3478f2f-8f82-4111-8d1c-eafa25c76f56" (UID: "d3478f2f-8f82-4111-8d1c-eafa25c76f56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.658153 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3478f2f-8f82-4111-8d1c-eafa25c76f56-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.774176 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerID="e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593" exitCode=0 Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.774251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4mhr" event={"ID":"d3478f2f-8f82-4111-8d1c-eafa25c76f56","Type":"ContainerDied","Data":"e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593"} Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.774276 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4mhr" event={"ID":"d3478f2f-8f82-4111-8d1c-eafa25c76f56","Type":"ContainerDied","Data":"7083d0172cfa0735962a85f88fd61038bf5b40859816b56d59d2aefd054f2cbc"} Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.774292 4858 scope.go:117] "RemoveContainer" containerID="e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.774402 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4mhr" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.816814 4858 scope.go:117] "RemoveContainer" containerID="8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.827183 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4mhr"] Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.838827 4858 scope.go:117] "RemoveContainer" containerID="5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.840141 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l4mhr"] Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.906815 4858 scope.go:117] "RemoveContainer" containerID="e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593" Feb 18 01:17:17 crc kubenswrapper[4858]: E0218 01:17:17.907200 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593\": container with ID starting with e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593 not found: ID does not exist" containerID="e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.907248 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593"} err="failed to get container status \"e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593\": rpc error: code = NotFound desc = could not find container \"e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593\": container with ID starting with e3be22d68982a45d6fc0ec71e197b276421f620fad5e79746111c4a5d3fdb593 not found: ID does not exist" Feb 18 01:17:17 crc 
kubenswrapper[4858]: I0218 01:17:17.907274 4858 scope.go:117] "RemoveContainer" containerID="8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc" Feb 18 01:17:17 crc kubenswrapper[4858]: E0218 01:17:17.907747 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc\": container with ID starting with 8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc not found: ID does not exist" containerID="8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.907879 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc"} err="failed to get container status \"8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc\": rpc error: code = NotFound desc = could not find container \"8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc\": container with ID starting with 8bac5fab4e3d2897529a2d27dfbdd8a3d37da350e1045386a25061e11cbb9fdc not found: ID does not exist" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.907971 4858 scope.go:117] "RemoveContainer" containerID="5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f" Feb 18 01:17:17 crc kubenswrapper[4858]: E0218 01:17:17.908403 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f\": container with ID starting with 5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f not found: ID does not exist" containerID="5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f" Feb 18 01:17:17 crc kubenswrapper[4858]: I0218 01:17:17.908526 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f"} err="failed to get container status \"5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f\": rpc error: code = NotFound desc = could not find container \"5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f\": container with ID starting with 5f9191ecdb0f536bbfcdb0b91af39ecfbdd5d61967ef55877be701f5a8964c1f not found: ID does not exist" Feb 18 01:17:18 crc kubenswrapper[4858]: E0218 01:17:18.423951 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:17:19 crc kubenswrapper[4858]: I0218 01:17:19.438546 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" path="/var/lib/kubelet/pods/d3478f2f-8f82-4111-8d1c-eafa25c76f56/volumes" Feb 18 01:17:29 crc kubenswrapper[4858]: E0218 01:17:29.422556 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:17:32 crc 
kubenswrapper[4858]: E0218 01:17:32.422019 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:17:42 crc kubenswrapper[4858]: E0218 01:17:42.421297 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:17:46 crc kubenswrapper[4858]: E0218 01:17:46.424405 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:17:57 crc kubenswrapper[4858]: I0218 01:17:57.443925 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:17:57 crc kubenswrapper[4858]: E0218 01:17:57.572128 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:17:57 crc kubenswrapper[4858]: E0218 01:17:57.572176 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:17:57 crc kubenswrapper[4858]: E0218 01:17:57.572291 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:17:57 crc kubenswrapper[4858]: E0218 01:17:57.573365 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:18:01 crc kubenswrapper[4858]: E0218 01:18:01.422134 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:18:12 crc kubenswrapper[4858]: E0218 01:18:12.421996 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:18:14 crc kubenswrapper[4858]: E0218 01:18:14.550806 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:18:14 crc kubenswrapper[4858]: E0218 01:18:14.551399 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:18:14 crc kubenswrapper[4858]: E0218 01:18:14.551698 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:18:14 crc kubenswrapper[4858]: E0218 01:18:14.553044 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:18:26 crc kubenswrapper[4858]: E0218 01:18:26.422686 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:18:28 crc kubenswrapper[4858]: E0218 01:18:28.421775 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:18:40 crc kubenswrapper[4858]: E0218 01:18:40.421851 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:18:41 crc kubenswrapper[4858]: E0218 01:18:41.424551 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:18:54 crc kubenswrapper[4858]: E0218 01:18:54.422567 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:18:55 crc kubenswrapper[4858]: I0218 01:18:55.265208 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:18:55 crc kubenswrapper[4858]: I0218 01:18:55.265601 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:18:56 crc kubenswrapper[4858]: E0218 01:18:56.421313 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:19:07 crc kubenswrapper[4858]: E0218 01:19:07.454020 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:19:07 crc kubenswrapper[4858]: E0218 01:19:07.454675 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.760451 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dmdlz"] Feb 18 01:19:08 crc kubenswrapper[4858]: E0218 01:19:08.761433 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerName="extract-content" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.761452 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerName="extract-content" Feb 18 01:19:08 crc kubenswrapper[4858]: E0218 01:19:08.761487 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerName="extract-utilities" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.761512 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerName="extract-utilities" Feb 18 01:19:08 crc kubenswrapper[4858]: E0218 01:19:08.761549 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerName="registry-server" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.761557 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerName="registry-server" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.761856 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3478f2f-8f82-4111-8d1c-eafa25c76f56" containerName="registry-server" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.763889 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.771078 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmdlz"] Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.897154 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca922c7-2f96-4553-9d73-90ec93132ab0-utilities\") pod \"certified-operators-dmdlz\" (UID: \"9ca922c7-2f96-4553-9d73-90ec93132ab0\") " pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.897650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsvml\" (UniqueName: \"kubernetes.io/projected/9ca922c7-2f96-4553-9d73-90ec93132ab0-kube-api-access-nsvml\") pod \"certified-operators-dmdlz\" (UID: \"9ca922c7-2f96-4553-9d73-90ec93132ab0\") " pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.897719 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca922c7-2f96-4553-9d73-90ec93132ab0-catalog-content\") pod \"certified-operators-dmdlz\" (UID: \"9ca922c7-2f96-4553-9d73-90ec93132ab0\") " pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.999257 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsvml\" (UniqueName: \"kubernetes.io/projected/9ca922c7-2f96-4553-9d73-90ec93132ab0-kube-api-access-nsvml\") pod \"certified-operators-dmdlz\" (UID: \"9ca922c7-2f96-4553-9d73-90ec93132ab0\") " pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.999348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca922c7-2f96-4553-9d73-90ec93132ab0-catalog-content\") pod \"certified-operators-dmdlz\" (UID: \"9ca922c7-2f96-4553-9d73-90ec93132ab0\") " pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.999386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca922c7-2f96-4553-9d73-90ec93132ab0-utilities\") pod \"certified-operators-dmdlz\" (UID: \"9ca922c7-2f96-4553-9d73-90ec93132ab0\") " pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.999883 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca922c7-2f96-4553-9d73-90ec93132ab0-catalog-content\") pod \"certified-operators-dmdlz\" (UID: \"9ca922c7-2f96-4553-9d73-90ec93132ab0\") " pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:08 crc kubenswrapper[4858]: I0218 01:19:08.999896 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca922c7-2f96-4553-9d73-90ec93132ab0-utilities\") pod \"certified-operators-dmdlz\" (UID: \"9ca922c7-2f96-4553-9d73-90ec93132ab0\") " pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:09 crc kubenswrapper[4858]: I0218 01:19:09.017226 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nsvml\" (UniqueName: \"kubernetes.io/projected/9ca922c7-2f96-4553-9d73-90ec93132ab0-kube-api-access-nsvml\") pod \"certified-operators-dmdlz\" (UID: \"9ca922c7-2f96-4553-9d73-90ec93132ab0\") " pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:09 crc kubenswrapper[4858]: I0218 01:19:09.086928 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:09 crc kubenswrapper[4858]: I0218 01:19:09.579595 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmdlz"] Feb 18 01:19:10 crc kubenswrapper[4858]: I0218 01:19:10.082651 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ca922c7-2f96-4553-9d73-90ec93132ab0" containerID="71608fe1a5263989e0eabccd6d54aae871d68c77ea3a84b66cde9e0408c4e6cb" exitCode=0 Feb 18 01:19:10 crc kubenswrapper[4858]: I0218 01:19:10.082718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmdlz" event={"ID":"9ca922c7-2f96-4553-9d73-90ec93132ab0","Type":"ContainerDied","Data":"71608fe1a5263989e0eabccd6d54aae871d68c77ea3a84b66cde9e0408c4e6cb"} Feb 18 01:19:10 crc kubenswrapper[4858]: I0218 01:19:10.082777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmdlz" event={"ID":"9ca922c7-2f96-4553-9d73-90ec93132ab0","Type":"ContainerStarted","Data":"c098598d64eeb7861614b6ace04cb84e663f63165bf7f6e60c5573fef02b559f"} Feb 18 01:19:16 crc kubenswrapper[4858]: I0218 01:19:16.144204 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ca922c7-2f96-4553-9d73-90ec93132ab0" containerID="6b7ab317938444c3018e8518724337e4a97c8f5437b609cd3cd05764dafa207b" exitCode=0 Feb 18 01:19:16 crc kubenswrapper[4858]: I0218 01:19:16.144398 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmdlz" event={"ID":"9ca922c7-2f96-4553-9d73-90ec93132ab0","Type":"ContainerDied","Data":"6b7ab317938444c3018e8518724337e4a97c8f5437b609cd3cd05764dafa207b"} Feb 18 01:19:17 crc kubenswrapper[4858]: I0218 01:19:17.163167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmdlz" event={"ID":"9ca922c7-2f96-4553-9d73-90ec93132ab0","Type":"ContainerStarted","Data":"4ba06bf42ab17060a8311f0a447a4d262a89db6ea82b2b21620c06dd2e8f9126"} Feb 18 01:19:17 crc kubenswrapper[4858]: I0218 01:19:17.197031 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dmdlz" podStartSLOduration=2.751920641 podStartE2EDuration="9.197002288s" podCreationTimestamp="2026-02-18 01:19:08 +0000 UTC" firstStartedPulling="2026-02-18 01:19:10.086470139 +0000 UTC m=+2703.392306881" lastFinishedPulling="2026-02-18 01:19:16.531551796 +0000 UTC m=+2709.837388528" observedRunningTime="2026-02-18 01:19:17.193775711 +0000 UTC m=+2710.499612483" watchObservedRunningTime="2026-02-18 01:19:17.197002288 +0000 UTC m=+2710.502839050" Feb 18 01:19:19 crc kubenswrapper[4858]: I0218 01:19:19.087959 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:19 crc kubenswrapper[4858]: I0218 01:19:19.088368 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:19 crc kubenswrapper[4858]: I0218 01:19:19.153324 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:19 crc kubenswrapper[4858]: E0218 01:19:19.422586 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:19:20 crc kubenswrapper[4858]: E0218 01:19:20.420844 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:19:25 crc kubenswrapper[4858]: I0218 01:19:25.265294 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:19:25 crc kubenswrapper[4858]: I0218 01:19:25.265787 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.161123 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dmdlz" Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.248011 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmdlz"] Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.317241 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dpkwc"] Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.317537 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dpkwc" podUID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerName="registry-server" containerID="cri-o://fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816" gracePeriod=2 Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.868511 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.937147 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-utilities\") pod \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.937275 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh4b7\" (UniqueName: \"kubernetes.io/projected/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-kube-api-access-qh4b7\") pod \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.937310 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-catalog-content\") pod \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\" (UID: \"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1\") " Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.944225 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-utilities" (OuterVolumeSpecName: "utilities") pod "0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" (UID: "0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:19:29 crc kubenswrapper[4858]: I0218 01:19:29.959733 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-kube-api-access-qh4b7" (OuterVolumeSpecName: "kube-api-access-qh4b7") pod "0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" (UID: "0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1"). InnerVolumeSpecName "kube-api-access-qh4b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.012998 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" (UID: "0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.039089 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh4b7\" (UniqueName: \"kubernetes.io/projected/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-kube-api-access-qh4b7\") on node \"crc\" DevicePath \"\"" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.039122 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.039131 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.285082 4858 generic.go:334] "Generic (PLEG): container finished" podID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerID="fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816" exitCode=0 Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.285137 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpkwc" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.285159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpkwc" event={"ID":"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1","Type":"ContainerDied","Data":"fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816"} Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.285520 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpkwc" event={"ID":"0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1","Type":"ContainerDied","Data":"898b1be81748defcc91b07f07117b2729cd4d10dec2470530e47c70a8598bbcf"} Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.285575 4858 scope.go:117] "RemoveContainer" containerID="fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.317642 4858 scope.go:117] "RemoveContainer" containerID="f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.334775 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dpkwc"] Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.351819 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dpkwc"] Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.378749 4858 scope.go:117] "RemoveContainer" containerID="802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.429464 4858 scope.go:117] "RemoveContainer" containerID="fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816" Feb 18 01:19:30 crc kubenswrapper[4858]: E0218 01:19:30.429709 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:19:30 crc kubenswrapper[4858]: E0218 01:19:30.429937 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816\": container with ID starting with fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816 not found: ID does not exist" containerID="fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.430000 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816"} err="failed to get container status \"fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816\": rpc error: code = NotFound desc = could not find container \"fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816\": container with ID starting with fd7edc35a90ef14d32cfc72cb0fdebea3546cc4c267b7f85caed196da8473816 not found: ID does not exist" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.430027 4858 scope.go:117] "RemoveContainer" containerID="f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe" Feb 18 01:19:30 crc kubenswrapper[4858]: E0218 01:19:30.430261 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe\": container with ID starting with f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe not found: ID does not exist" containerID="f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.430303 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe"} err="failed to get container status \"f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe\": rpc error: code = NotFound desc = could not find container \"f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe\": container with ID starting with f87c4f894b07c149b5c9e7da9eead2b979c76f346e22a585f6041c66d80765fe not found: ID does not exist" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.430322 4858 scope.go:117] "RemoveContainer" containerID="802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d" Feb 18 01:19:30 crc kubenswrapper[4858]: E0218 01:19:30.430535 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d\": container with ID starting with 802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d not found: ID does not exist" containerID="802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d" Feb 18 01:19:30 crc kubenswrapper[4858]: I0218 01:19:30.430556 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d"} err="failed to get container status \"802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d\": rpc error: code = NotFound desc = could not find container \"802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d\": container with ID starting with 802bd5b500c55911ab51bad114f3d23fef23c8d606e9a029aaffdda89780747d not found: ID does not exist" Feb 18 01:19:31 crc kubenswrapper[4858]: I0218 01:19:31.435073 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" path="/var/lib/kubelet/pods/0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1/volumes" Feb 18 01:19:35 crc kubenswrapper[4858]: E0218 01:19:35.421706 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:19:41 crc kubenswrapper[4858]: E0218 01:19:41.422380 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:19:50 crc kubenswrapper[4858]: E0218 01:19:50.423339 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:19:55 crc kubenswrapper[4858]: I0218 01:19:55.265633 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:19:55 crc kubenswrapper[4858]: I0218 01:19:55.266235 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:19:55 crc kubenswrapper[4858]: I0218 01:19:55.266286 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:19:55 crc kubenswrapper[4858]: I0218 01:19:55.267137 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"313584f84c2f9b206f79e2d3fe4418ef5be7b70b242870da3851fa358bfd4f44"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:19:55 crc kubenswrapper[4858]: I0218 01:19:55.267200 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://313584f84c2f9b206f79e2d3fe4418ef5be7b70b242870da3851fa358bfd4f44" gracePeriod=600 Feb 18 01:19:55 crc kubenswrapper[4858]: E0218 01:19:55.420615 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 
01:19:55 crc kubenswrapper[4858]: I0218 01:19:55.544256 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="313584f84c2f9b206f79e2d3fe4418ef5be7b70b242870da3851fa358bfd4f44" exitCode=0 Feb 18 01:19:55 crc kubenswrapper[4858]: I0218 01:19:55.544570 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"313584f84c2f9b206f79e2d3fe4418ef5be7b70b242870da3851fa358bfd4f44"} Feb 18 01:19:55 crc kubenswrapper[4858]: I0218 01:19:55.544607 4858 scope.go:117] "RemoveContainer" containerID="45b32d37d7d547e352522c3baaa916052872e610c3192b3a141fb05c4a5d15fc" Feb 18 01:19:56 crc kubenswrapper[4858]: I0218 01:19:56.557669 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664"} Feb 18 01:20:03 crc kubenswrapper[4858]: E0218 01:20:03.421664 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:20:08 crc kubenswrapper[4858]: E0218 01:20:08.421625 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:20:18 crc kubenswrapper[4858]: E0218 01:20:18.421766 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:20:20 crc kubenswrapper[4858]: E0218 01:20:20.421032 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:20:32 crc kubenswrapper[4858]: E0218 01:20:32.422403 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:20:33 crc kubenswrapper[4858]: E0218 01:20:33.424403 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
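The machine-config-daemon entries above show its liveness probe, an HTTP GET against http://127.0.0.1:8798/health, failing with connection refused at 01:19:25 with no further action, then again at 01:19:55, at which point the kubelet marks the probe unhealthy, logs "failed liveness probe, will be restarted", kills the container with the pod's 600 s grace period, and starts a replacement. A restart only follows once consecutive failures reach the probe's failure threshold; neither the threshold nor the period is visible in this log, so the sketch below assumes values purely for illustration.

```python
import urllib.request
from urllib.error import URLError

# Endpoint copied from the prober entries above; the threshold is an assumption made
# for illustration, since the probe spec itself is not shown in this log.
HEALTH_URL = "http://127.0.0.1:8798/health"
FAILURE_THRESHOLD = 3

def probe_once(timeout: float = 1.0) -> bool:
    """One liveness check: a 2xx/3xx response counts as success, any error as failure."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        # The failure mode logged above:
        # "dial tcp 127.0.0.1:8798: connect: connection refused"
        return False

failures = 0
for _ in range(FAILURE_THRESHOLD):            # one iteration per probe period
    failures = 0 if probe_once() else failures + 1
    if failures >= FAILURE_THRESHOLD:
        # The kubelet's next step in the entries above: mark the probe unhealthy, log
        # "failed liveness probe, will be restarted", then stop the container with the
        # pod's grace period (gracePeriod=600 here) and start a replacement.
        print("liveness failed, container would be restarted")
```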
podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:20:45 crc kubenswrapper[4858]: E0218 01:20:45.422583 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:20:46 crc kubenswrapper[4858]: E0218 01:20:46.421646 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:20:57 crc kubenswrapper[4858]: E0218 01:20:57.430834 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:20:58 crc kubenswrapper[4858]: E0218 01:20:58.422265 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:21:11 crc kubenswrapper[4858]: E0218 01:21:11.422079 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:21:12 crc kubenswrapper[4858]: E0218 01:21:12.421273 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:21:21 crc kubenswrapper[4858]: I0218 01:21:21.965370 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xd57t"] Feb 18 01:21:21 crc kubenswrapper[4858]: E0218 01:21:21.966408 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerName="registry-server" Feb 18 01:21:21 crc kubenswrapper[4858]: I0218 01:21:21.966426 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerName="registry-server" Feb 18 01:21:21 crc kubenswrapper[4858]: E0218 01:21:21.966449 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerName="extract-utilities" Feb 18 01:21:21 crc kubenswrapper[4858]: I0218 01:21:21.966456 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerName="extract-utilities" Feb 18 01:21:21 crc kubenswrapper[4858]: E0218 01:21:21.966468 4858 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerName="extract-content" Feb 18 01:21:21 crc kubenswrapper[4858]: I0218 01:21:21.966475 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerName="extract-content" Feb 18 01:21:21 crc kubenswrapper[4858]: I0218 01:21:21.966713 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c53f3ee-32ae-4fb7-9dae-4dfb8b9f97b1" containerName="registry-server" Feb 18 01:21:21 crc kubenswrapper[4858]: I0218 01:21:21.970815 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.021752 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xd57t"] Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.059305 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-catalog-content\") pod \"community-operators-xd57t\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.059383 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-utilities\") pod \"community-operators-xd57t\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.059531 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwf4s\" (UniqueName: \"kubernetes.io/projected/e3b76cd8-2d23-4fd9-8647-48f4de618f01-kube-api-access-nwf4s\") pod \"community-operators-xd57t\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.161526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwf4s\" (UniqueName: \"kubernetes.io/projected/e3b76cd8-2d23-4fd9-8647-48f4de618f01-kube-api-access-nwf4s\") pod \"community-operators-xd57t\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.161663 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-catalog-content\") pod \"community-operators-xd57t\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.161694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-utilities\") pod \"community-operators-xd57t\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.162181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-utilities\") pod \"community-operators-xd57t\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.162254 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-catalog-content\") pod \"community-operators-xd57t\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.184958 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwf4s\" (UniqueName: \"kubernetes.io/projected/e3b76cd8-2d23-4fd9-8647-48f4de618f01-kube-api-access-nwf4s\") pod \"community-operators-xd57t\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.318185 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:22 crc kubenswrapper[4858]: W0218 01:21:22.847908 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3b76cd8_2d23_4fd9_8647_48f4de618f01.slice/crio-3e6137daa748fddc43b14097dc62ec0f6738e3179c537fa2099950337f07e493 WatchSource:0}: Error finding container 3e6137daa748fddc43b14097dc62ec0f6738e3179c537fa2099950337f07e493: Status 404 returned error can't find the container with id 3e6137daa748fddc43b14097dc62ec0f6738e3179c537fa2099950337f07e493 Feb 18 01:21:22 crc kubenswrapper[4858]: I0218 01:21:22.850235 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xd57t"] Feb 18 01:21:23 crc kubenswrapper[4858]: E0218 01:21:23.220504 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3b76cd8_2d23_4fd9_8647_48f4de618f01.slice/crio-conmon-9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7.scope\": RecentStats: unable to find data in memory cache]" Feb 18 01:21:23 crc kubenswrapper[4858]: E0218 01:21:23.421117 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:21:23 crc kubenswrapper[4858]: I0218 01:21:23.558440 4858 generic.go:334] "Generic (PLEG): container finished" podID="13898d25-206e-4010-9f2f-54546c48aee6" containerID="452ba6df3f0a1e6cfe64aaa272d9cf694fe7f906a4b06111915e6ffe2826939e" exitCode=2 Feb 18 01:21:23 crc kubenswrapper[4858]: I0218 01:21:23.558591 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" event={"ID":"13898d25-206e-4010-9f2f-54546c48aee6","Type":"ContainerDied","Data":"452ba6df3f0a1e6cfe64aaa272d9cf694fe7f906a4b06111915e6ffe2826939e"} Feb 18 01:21:23 crc kubenswrapper[4858]: I0218 01:21:23.562335 4858 generic.go:334] "Generic (PLEG): container finished" podID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" 
containerID="9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7" exitCode=0 Feb 18 01:21:23 crc kubenswrapper[4858]: I0218 01:21:23.562433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd57t" event={"ID":"e3b76cd8-2d23-4fd9-8647-48f4de618f01","Type":"ContainerDied","Data":"9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7"} Feb 18 01:21:23 crc kubenswrapper[4858]: I0218 01:21:23.562534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd57t" event={"ID":"e3b76cd8-2d23-4fd9-8647-48f4de618f01","Type":"ContainerStarted","Data":"3e6137daa748fddc43b14097dc62ec0f6738e3179c537fa2099950337f07e493"} Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.175221 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.250569 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r25wf\" (UniqueName: \"kubernetes.io/projected/13898d25-206e-4010-9f2f-54546c48aee6-kube-api-access-r25wf\") pod \"13898d25-206e-4010-9f2f-54546c48aee6\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.250850 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-inventory\") pod \"13898d25-206e-4010-9f2f-54546c48aee6\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.250985 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-ssh-key-openstack-edpm-ipam\") pod \"13898d25-206e-4010-9f2f-54546c48aee6\" (UID: \"13898d25-206e-4010-9f2f-54546c48aee6\") " Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.258289 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13898d25-206e-4010-9f2f-54546c48aee6-kube-api-access-r25wf" (OuterVolumeSpecName: "kube-api-access-r25wf") pod "13898d25-206e-4010-9f2f-54546c48aee6" (UID: "13898d25-206e-4010-9f2f-54546c48aee6"). InnerVolumeSpecName "kube-api-access-r25wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.291723 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "13898d25-206e-4010-9f2f-54546c48aee6" (UID: "13898d25-206e-4010-9f2f-54546c48aee6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.308608 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-inventory" (OuterVolumeSpecName: "inventory") pod "13898d25-206e-4010-9f2f-54546c48aee6" (UID: "13898d25-206e-4010-9f2f-54546c48aee6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.353527 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r25wf\" (UniqueName: \"kubernetes.io/projected/13898d25-206e-4010-9f2f-54546c48aee6-kube-api-access-r25wf\") on node \"crc\" DevicePath \"\"" Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.353565 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.353582 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13898d25-206e-4010-9f2f-54546c48aee6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.591369 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" event={"ID":"13898d25-206e-4010-9f2f-54546c48aee6","Type":"ContainerDied","Data":"03094c0c7b972abeb330ec70fc4f07936c75ccd7dfdda8b8d52e8197de6684bf"} Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.591781 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03094c0c7b972abeb330ec70fc4f07936c75ccd7dfdda8b8d52e8197de6684bf" Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.591404 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj" Feb 18 01:21:25 crc kubenswrapper[4858]: I0218 01:21:25.596043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd57t" event={"ID":"e3b76cd8-2d23-4fd9-8647-48f4de618f01","Type":"ContainerStarted","Data":"eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7"} Feb 18 01:21:26 crc kubenswrapper[4858]: E0218 01:21:26.423359 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:21:26 crc kubenswrapper[4858]: I0218 01:21:26.607760 4858 generic.go:334] "Generic (PLEG): container finished" podID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" containerID="eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7" exitCode=0 Feb 18 01:21:26 crc kubenswrapper[4858]: I0218 01:21:26.607832 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd57t" event={"ID":"e3b76cd8-2d23-4fd9-8647-48f4de618f01","Type":"ContainerDied","Data":"eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7"} Feb 18 01:21:27 crc kubenswrapper[4858]: I0218 01:21:27.618156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd57t" event={"ID":"e3b76cd8-2d23-4fd9-8647-48f4de618f01","Type":"ContainerStarted","Data":"68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07"} Feb 18 01:21:27 crc kubenswrapper[4858]: I0218 01:21:27.637644 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xd57t" podStartSLOduration=3.204409903 podStartE2EDuration="6.637629822s" 
podCreationTimestamp="2026-02-18 01:21:21 +0000 UTC" firstStartedPulling="2026-02-18 01:21:23.565619191 +0000 UTC m=+2836.871455933" lastFinishedPulling="2026-02-18 01:21:26.99883908 +0000 UTC m=+2840.304675852" observedRunningTime="2026-02-18 01:21:27.636096316 +0000 UTC m=+2840.941933048" watchObservedRunningTime="2026-02-18 01:21:27.637629822 +0000 UTC m=+2840.943466544" Feb 18 01:21:32 crc kubenswrapper[4858]: I0218 01:21:32.318414 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:32 crc kubenswrapper[4858]: I0218 01:21:32.318999 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:32 crc kubenswrapper[4858]: I0218 01:21:32.406637 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:32 crc kubenswrapper[4858]: I0218 01:21:32.766094 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:32 crc kubenswrapper[4858]: I0218 01:21:32.846585 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xd57t"] Feb 18 01:21:34 crc kubenswrapper[4858]: E0218 01:21:34.423286 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:21:34 crc kubenswrapper[4858]: I0218 01:21:34.701603 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xd57t" podUID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" containerName="registry-server" containerID="cri-o://68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07" gracePeriod=2 Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.342262 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.510713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-catalog-content\") pod \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.510817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-utilities\") pod \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.510924 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwf4s\" (UniqueName: \"kubernetes.io/projected/e3b76cd8-2d23-4fd9-8647-48f4de618f01-kube-api-access-nwf4s\") pod \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\" (UID: \"e3b76cd8-2d23-4fd9-8647-48f4de618f01\") " Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.518950 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-utilities" (OuterVolumeSpecName: "utilities") pod "e3b76cd8-2d23-4fd9-8647-48f4de618f01" (UID: "e3b76cd8-2d23-4fd9-8647-48f4de618f01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.525811 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b76cd8-2d23-4fd9-8647-48f4de618f01-kube-api-access-nwf4s" (OuterVolumeSpecName: "kube-api-access-nwf4s") pod "e3b76cd8-2d23-4fd9-8647-48f4de618f01" (UID: "e3b76cd8-2d23-4fd9-8647-48f4de618f01"). InnerVolumeSpecName "kube-api-access-nwf4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.579273 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3b76cd8-2d23-4fd9-8647-48f4de618f01" (UID: "e3b76cd8-2d23-4fd9-8647-48f4de618f01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.613958 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.613999 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwf4s\" (UniqueName: \"kubernetes.io/projected/e3b76cd8-2d23-4fd9-8647-48f4de618f01-kube-api-access-nwf4s\") on node \"crc\" DevicePath \"\"" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.614010 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b76cd8-2d23-4fd9-8647-48f4de618f01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.717696 4858 generic.go:334] "Generic (PLEG): container finished" podID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" containerID="68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07" exitCode=0 Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.717737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd57t" event={"ID":"e3b76cd8-2d23-4fd9-8647-48f4de618f01","Type":"ContainerDied","Data":"68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07"} Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.717762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xd57t" event={"ID":"e3b76cd8-2d23-4fd9-8647-48f4de618f01","Type":"ContainerDied","Data":"3e6137daa748fddc43b14097dc62ec0f6738e3179c537fa2099950337f07e493"} Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.717779 4858 scope.go:117] "RemoveContainer" containerID="68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.717828 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xd57t" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.756187 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xd57t"] Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.761632 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xd57t"] Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.764779 4858 scope.go:117] "RemoveContainer" containerID="eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.800662 4858 scope.go:117] "RemoveContainer" containerID="9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.855930 4858 scope.go:117] "RemoveContainer" containerID="68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07" Feb 18 01:21:35 crc kubenswrapper[4858]: E0218 01:21:35.856483 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07\": container with ID starting with 68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07 not found: ID does not exist" containerID="68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.856564 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07"} err="failed to get container status \"68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07\": rpc error: code = NotFound desc = could not find container \"68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07\": container with ID starting with 68cfde9f3171bf59429b32c6866413d6e381b3c38eaab0ee0ac7e2a9c0048f07 not found: ID does not exist" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.856605 4858 scope.go:117] "RemoveContainer" containerID="eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7" Feb 18 01:21:35 crc kubenswrapper[4858]: E0218 01:21:35.857034 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7\": container with ID starting with eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7 not found: ID does not exist" containerID="eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.857067 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7"} err="failed to get container status \"eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7\": rpc error: code = NotFound desc = could not find container \"eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7\": container with ID starting with eff3eee0a4fa36621ed9175a2d753a40d1c8287d98fbcc01f6b70329332029a7 not found: ID does not exist" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.857087 4858 scope.go:117] "RemoveContainer" containerID="9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7" Feb 18 01:21:35 crc kubenswrapper[4858]: E0218 01:21:35.857329 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7\": container with ID starting with 9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7 not found: ID does not exist" containerID="9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7" Feb 18 01:21:35 crc kubenswrapper[4858]: I0218 01:21:35.857370 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7"} err="failed to get container status \"9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7\": rpc error: code = NotFound desc = could not find container \"9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7\": container with ID starting with 9d8f5f2126a201ac863a321912a8c307bfee941c885819b4baf459eb39cf04a7 not found: ID does not exist" Feb 18 01:21:37 crc kubenswrapper[4858]: I0218 01:21:37.438722 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" path="/var/lib/kubelet/pods/e3b76cd8-2d23-4fd9-8647-48f4de618f01/volumes" Feb 18 01:21:38 crc kubenswrapper[4858]: E0218 01:21:38.423119 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:21:47 crc kubenswrapper[4858]: E0218 01:21:47.427346 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:21:51 crc kubenswrapper[4858]: E0218 01:21:51.424281 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:21:55 crc kubenswrapper[4858]: I0218 01:21:55.265367 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:21:55 crc kubenswrapper[4858]: I0218 01:21:55.265813 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.045792 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq"] Feb 18 01:22:02 crc kubenswrapper[4858]: E0218 01:22:02.047012 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" 
containerName="extract-content" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.047032 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" containerName="extract-content" Feb 18 01:22:02 crc kubenswrapper[4858]: E0218 01:22:02.047064 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" containerName="extract-utilities" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.047072 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" containerName="extract-utilities" Feb 18 01:22:02 crc kubenswrapper[4858]: E0218 01:22:02.047101 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" containerName="registry-server" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.047111 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" containerName="registry-server" Feb 18 01:22:02 crc kubenswrapper[4858]: E0218 01:22:02.047125 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13898d25-206e-4010-9f2f-54546c48aee6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.047134 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="13898d25-206e-4010-9f2f-54546c48aee6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.047423 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="13898d25-206e-4010-9f2f-54546c48aee6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.047441 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b76cd8-2d23-4fd9-8647-48f4de618f01" containerName="registry-server" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.048376 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.051950 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.052339 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.052710 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.052955 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.059097 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq"] Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.176690 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.176932 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.177042 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwgf\" (UniqueName: \"kubernetes.io/projected/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-kube-api-access-ptwgf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.279479 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.279594 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwgf\" (UniqueName: \"kubernetes.io/projected/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-kube-api-access-ptwgf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.279745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.295457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.296435 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.300411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwgf\" (UniqueName: \"kubernetes.io/projected/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-kube-api-access-ptwgf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.385659 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:22:02 crc kubenswrapper[4858]: E0218 01:22:02.422984 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:22:02 crc kubenswrapper[4858]: I0218 01:22:02.998968 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq"] Feb 18 01:22:03 crc kubenswrapper[4858]: W0218 01:22:03.001742 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb76d04a7_6eb2_4a9a_8934_ff0cea670d77.slice/crio-86e71319361e7f7e4d5aaaa0059fb483f4b0f5c92fcada4c2da7c50321f5aa65 WatchSource:0}: Error finding container 86e71319361e7f7e4d5aaaa0059fb483f4b0f5c92fcada4c2da7c50321f5aa65: Status 404 returned error can't find the container with id 86e71319361e7f7e4d5aaaa0059fb483f4b0f5c92fcada4c2da7c50321f5aa65 Feb 18 01:22:03 crc kubenswrapper[4858]: I0218 01:22:03.041944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" event={"ID":"b76d04a7-6eb2-4a9a-8934-ff0cea670d77","Type":"ContainerStarted","Data":"86e71319361e7f7e4d5aaaa0059fb483f4b0f5c92fcada4c2da7c50321f5aa65"} Feb 18 01:22:04 crc kubenswrapper[4858]: I0218 01:22:04.067999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" 
event={"ID":"b76d04a7-6eb2-4a9a-8934-ff0cea670d77","Type":"ContainerStarted","Data":"27e3d2d79f3812eced35fa3df57a528c5ed7cebec425d416103e7196a77dd912"} Feb 18 01:22:04 crc kubenswrapper[4858]: I0218 01:22:04.091616 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" podStartSLOduration=1.521375617 podStartE2EDuration="2.091598842s" podCreationTimestamp="2026-02-18 01:22:02 +0000 UTC" firstStartedPulling="2026-02-18 01:22:03.003434346 +0000 UTC m=+2876.309271078" lastFinishedPulling="2026-02-18 01:22:03.573657561 +0000 UTC m=+2876.879494303" observedRunningTime="2026-02-18 01:22:04.089083552 +0000 UTC m=+2877.394920294" watchObservedRunningTime="2026-02-18 01:22:04.091598842 +0000 UTC m=+2877.397435584" Feb 18 01:22:04 crc kubenswrapper[4858]: E0218 01:22:04.421853 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:22:14 crc kubenswrapper[4858]: E0218 01:22:14.422271 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:22:19 crc kubenswrapper[4858]: E0218 01:22:19.423999 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:22:20 crc kubenswrapper[4858]: I0218 01:22:20.933183 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vbq4h"] Feb 18 01:22:20 crc kubenswrapper[4858]: I0218 01:22:20.936127 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:20 crc kubenswrapper[4858]: I0218 01:22:20.943657 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbq4h"] Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.011117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlhm\" (UniqueName: \"kubernetes.io/projected/d0796b11-5075-422e-aeeb-2e186e20dc86-kube-api-access-qmlhm\") pod \"redhat-marketplace-vbq4h\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.011198 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-catalog-content\") pod \"redhat-marketplace-vbq4h\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.011239 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-utilities\") pod \"redhat-marketplace-vbq4h\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.113747 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-catalog-content\") pod \"redhat-marketplace-vbq4h\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.113817 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-utilities\") pod \"redhat-marketplace-vbq4h\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.113958 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmlhm\" (UniqueName: \"kubernetes.io/projected/d0796b11-5075-422e-aeeb-2e186e20dc86-kube-api-access-qmlhm\") pod \"redhat-marketplace-vbq4h\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.114674 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-utilities\") pod \"redhat-marketplace-vbq4h\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.114938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-catalog-content\") pod \"redhat-marketplace-vbq4h\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.143950 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qmlhm\" (UniqueName: \"kubernetes.io/projected/d0796b11-5075-422e-aeeb-2e186e20dc86-kube-api-access-qmlhm\") pod \"redhat-marketplace-vbq4h\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.322024 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:21 crc kubenswrapper[4858]: I0218 01:22:21.860069 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbq4h"] Feb 18 01:22:22 crc kubenswrapper[4858]: I0218 01:22:22.285984 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerID="958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59" exitCode=0 Feb 18 01:22:22 crc kubenswrapper[4858]: I0218 01:22:22.286125 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbq4h" event={"ID":"d0796b11-5075-422e-aeeb-2e186e20dc86","Type":"ContainerDied","Data":"958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59"} Feb 18 01:22:22 crc kubenswrapper[4858]: I0218 01:22:22.286356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbq4h" event={"ID":"d0796b11-5075-422e-aeeb-2e186e20dc86","Type":"ContainerStarted","Data":"8208d3934cf3bd238659f2e9ccc1fada8c945ac7dd40458df3574353d5bef627"} Feb 18 01:22:23 crc kubenswrapper[4858]: I0218 01:22:23.297195 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbq4h" event={"ID":"d0796b11-5075-422e-aeeb-2e186e20dc86","Type":"ContainerStarted","Data":"027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac"} Feb 18 01:22:24 crc kubenswrapper[4858]: I0218 01:22:24.309674 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerID="027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac" exitCode=0 Feb 18 01:22:24 crc kubenswrapper[4858]: I0218 01:22:24.309715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbq4h" event={"ID":"d0796b11-5075-422e-aeeb-2e186e20dc86","Type":"ContainerDied","Data":"027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac"} Feb 18 01:22:25 crc kubenswrapper[4858]: I0218 01:22:25.265242 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:22:25 crc kubenswrapper[4858]: I0218 01:22:25.265737 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:22:25 crc kubenswrapper[4858]: I0218 01:22:25.324046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbq4h" event={"ID":"d0796b11-5075-422e-aeeb-2e186e20dc86","Type":"ContainerStarted","Data":"736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc"} Feb 18 01:22:25 crc 
kubenswrapper[4858]: I0218 01:22:25.359118 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vbq4h" podStartSLOduration=2.745553566 podStartE2EDuration="5.359101775s" podCreationTimestamp="2026-02-18 01:22:20 +0000 UTC" firstStartedPulling="2026-02-18 01:22:22.287810705 +0000 UTC m=+2895.593647437" lastFinishedPulling="2026-02-18 01:22:24.901358914 +0000 UTC m=+2898.207195646" observedRunningTime="2026-02-18 01:22:25.357168949 +0000 UTC m=+2898.663005691" watchObservedRunningTime="2026-02-18 01:22:25.359101775 +0000 UTC m=+2898.664938517" Feb 18 01:22:27 crc kubenswrapper[4858]: E0218 01:22:27.429156 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:22:31 crc kubenswrapper[4858]: I0218 01:22:31.323125 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:31 crc kubenswrapper[4858]: I0218 01:22:31.323965 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:31 crc kubenswrapper[4858]: I0218 01:22:31.397900 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:31 crc kubenswrapper[4858]: I0218 01:22:31.469822 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:34 crc kubenswrapper[4858]: E0218 01:22:34.421872 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:22:34 crc kubenswrapper[4858]: I0218 01:22:34.897489 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbq4h"] Feb 18 01:22:34 crc kubenswrapper[4858]: I0218 01:22:34.898056 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vbq4h" podUID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerName="registry-server" containerID="cri-o://736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc" gracePeriod=2 Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.440929 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.442671 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerID="736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc" exitCode=0 Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.442710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbq4h" event={"ID":"d0796b11-5075-422e-aeeb-2e186e20dc86","Type":"ContainerDied","Data":"736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc"} Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.442736 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbq4h" event={"ID":"d0796b11-5075-422e-aeeb-2e186e20dc86","Type":"ContainerDied","Data":"8208d3934cf3bd238659f2e9ccc1fada8c945ac7dd40458df3574353d5bef627"} Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.442756 4858 scope.go:117] "RemoveContainer" containerID="736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.473234 4858 scope.go:117] "RemoveContainer" containerID="027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.503604 4858 scope.go:117] "RemoveContainer" containerID="958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.538639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-utilities\") pod \"d0796b11-5075-422e-aeeb-2e186e20dc86\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.538773 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-catalog-content\") pod \"d0796b11-5075-422e-aeeb-2e186e20dc86\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.538862 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmlhm\" (UniqueName: \"kubernetes.io/projected/d0796b11-5075-422e-aeeb-2e186e20dc86-kube-api-access-qmlhm\") pod \"d0796b11-5075-422e-aeeb-2e186e20dc86\" (UID: \"d0796b11-5075-422e-aeeb-2e186e20dc86\") " Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.540484 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-utilities" (OuterVolumeSpecName: "utilities") pod "d0796b11-5075-422e-aeeb-2e186e20dc86" (UID: "d0796b11-5075-422e-aeeb-2e186e20dc86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.545170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0796b11-5075-422e-aeeb-2e186e20dc86-kube-api-access-qmlhm" (OuterVolumeSpecName: "kube-api-access-qmlhm") pod "d0796b11-5075-422e-aeeb-2e186e20dc86" (UID: "d0796b11-5075-422e-aeeb-2e186e20dc86"). InnerVolumeSpecName "kube-api-access-qmlhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.565199 4858 scope.go:117] "RemoveContainer" containerID="736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc" Feb 18 01:22:35 crc kubenswrapper[4858]: E0218 01:22:35.565733 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc\": container with ID starting with 736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc not found: ID does not exist" containerID="736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.565798 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc"} err="failed to get container status \"736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc\": rpc error: code = NotFound desc = could not find container \"736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc\": container with ID starting with 736d241f1495e2b3a05bd51db38bfe81926ba28a23f5d9ff85349e4e9a070bcc not found: ID does not exist" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.565833 4858 scope.go:117] "RemoveContainer" containerID="027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac" Feb 18 01:22:35 crc kubenswrapper[4858]: E0218 01:22:35.566184 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac\": container with ID starting with 027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac not found: ID does not exist" containerID="027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.566211 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac"} err="failed to get container status \"027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac\": rpc error: code = NotFound desc = could not find container \"027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac\": container with ID starting with 027f4e58d43648eec3fa955ba08221f832ddd99ea6de1fb7d6d77a445e131dac not found: ID does not exist" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.566229 4858 scope.go:117] "RemoveContainer" containerID="958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.566348 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0796b11-5075-422e-aeeb-2e186e20dc86" (UID: "d0796b11-5075-422e-aeeb-2e186e20dc86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:22:35 crc kubenswrapper[4858]: E0218 01:22:35.566575 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59\": container with ID starting with 958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59 not found: ID does not exist" containerID="958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.566605 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59"} err="failed to get container status \"958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59\": rpc error: code = NotFound desc = could not find container \"958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59\": container with ID starting with 958720453dc99149d707fa53500925ee5d2aa768ce9a1208d005204dc3d0ad59 not found: ID does not exist" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.641320 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.641450 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0796b11-5075-422e-aeeb-2e186e20dc86-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:22:35 crc kubenswrapper[4858]: I0218 01:22:35.641465 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmlhm\" (UniqueName: \"kubernetes.io/projected/d0796b11-5075-422e-aeeb-2e186e20dc86-kube-api-access-qmlhm\") on node \"crc\" DevicePath \"\"" Feb 18 01:22:36 crc kubenswrapper[4858]: I0218 01:22:36.453959 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbq4h" Feb 18 01:22:36 crc kubenswrapper[4858]: I0218 01:22:36.498092 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbq4h"] Feb 18 01:22:36 crc kubenswrapper[4858]: I0218 01:22:36.506455 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbq4h"] Feb 18 01:22:37 crc kubenswrapper[4858]: I0218 01:22:37.429413 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0796b11-5075-422e-aeeb-2e186e20dc86" path="/var/lib/kubelet/pods/d0796b11-5075-422e-aeeb-2e186e20dc86/volumes" Feb 18 01:22:42 crc kubenswrapper[4858]: E0218 01:22:42.422694 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:22:49 crc kubenswrapper[4858]: E0218 01:22:49.423634 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:22:55 crc kubenswrapper[4858]: I0218 01:22:55.265593 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:22:55 crc kubenswrapper[4858]: I0218 01:22:55.266123 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:22:55 crc kubenswrapper[4858]: I0218 01:22:55.266176 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:22:55 crc kubenswrapper[4858]: I0218 01:22:55.267000 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:22:55 crc kubenswrapper[4858]: I0218 01:22:55.267059 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" gracePeriod=600 Feb 18 01:22:55 crc kubenswrapper[4858]: E0218 01:22:55.389559 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:22:55 crc kubenswrapper[4858]: I0218 01:22:55.686707 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" exitCode=0 Feb 18 01:22:55 crc kubenswrapper[4858]: I0218 01:22:55.687038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664"} Feb 18 01:22:55 crc kubenswrapper[4858]: I0218 01:22:55.687077 4858 scope.go:117] "RemoveContainer" containerID="313584f84c2f9b206f79e2d3fe4418ef5be7b70b242870da3851fa358bfd4f44" Feb 18 01:22:55 crc kubenswrapper[4858]: I0218 01:22:55.687769 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:22:55 crc kubenswrapper[4858]: E0218 01:22:55.688004 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:22:56 crc kubenswrapper[4858]: E0218 01:22:56.420850 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:23:03 crc kubenswrapper[4858]: E0218 01:23:03.422641 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:23:08 crc kubenswrapper[4858]: I0218 01:23:08.421105 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:23:08 crc kubenswrapper[4858]: E0218 01:23:08.422275 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:23:11 crc kubenswrapper[4858]: I0218 01:23:11.422282 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:23:11 crc kubenswrapper[4858]: E0218 01:23:11.554554 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:23:11 crc kubenswrapper[4858]: E0218 01:23:11.554697 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:23:11 crc kubenswrapper[4858]: E0218 01:23:11.554958 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:23:11 crc kubenswrapper[4858]: E0218 01:23:11.556683 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:23:17 crc kubenswrapper[4858]: E0218 01:23:17.562803 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:23:17 crc kubenswrapper[4858]: E0218 01:23:17.563324 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:23:17 crc kubenswrapper[4858]: E0218 01:23:17.563460 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:23:17 crc kubenswrapper[4858]: E0218 01:23:17.564555 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:23:20 crc kubenswrapper[4858]: I0218 01:23:20.420003 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:23:20 crc kubenswrapper[4858]: E0218 01:23:20.420547 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:23:26 crc kubenswrapper[4858]: E0218 01:23:26.422242 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:23:31 crc kubenswrapper[4858]: E0218 01:23:31.424040 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:23:34 crc kubenswrapper[4858]: I0218 01:23:34.420057 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:23:34 crc kubenswrapper[4858]: E0218 01:23:34.421353 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:23:40 crc kubenswrapper[4858]: E0218 01:23:40.421797 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:23:46 crc kubenswrapper[4858]: I0218 01:23:46.420196 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:23:46 crc kubenswrapper[4858]: E0218 01:23:46.421223 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:23:46 crc kubenswrapper[4858]: E0218 01:23:46.422461 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:23:54 crc kubenswrapper[4858]: E0218 01:23:54.420999 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:24:00 crc kubenswrapper[4858]: I0218 01:24:00.422327 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:24:00 crc kubenswrapper[4858]: E0218 01:24:00.422808 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:24:00 crc kubenswrapper[4858]: E0218 01:24:00.423352 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:24:08 crc kubenswrapper[4858]: E0218 01:24:08.424004 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:24:12 crc kubenswrapper[4858]: I0218 01:24:12.419459 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:24:12 crc kubenswrapper[4858]: E0218 01:24:12.420040 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:24:12 crc kubenswrapper[4858]: E0218 01:24:12.421646 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:24:22 crc kubenswrapper[4858]: E0218 01:24:22.422576 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:24:24 crc kubenswrapper[4858]: I0218 01:24:24.420040 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:24:24 crc kubenswrapper[4858]: E0218 01:24:24.420510 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:24:27 crc kubenswrapper[4858]: E0218 01:24:27.422413 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:24:35 crc kubenswrapper[4858]: I0218 01:24:35.420411 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:24:35 crc kubenswrapper[4858]: E0218 01:24:35.422506 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:24:35 crc kubenswrapper[4858]: E0218 01:24:35.422533 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:24:41 crc kubenswrapper[4858]: E0218 01:24:41.423904 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:24:48 crc kubenswrapper[4858]: I0218 01:24:48.419877 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:24:48 crc kubenswrapper[4858]: E0218 01:24:48.420814 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:24:49 crc kubenswrapper[4858]: E0218 01:24:49.422439 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:24:52 crc kubenswrapper[4858]: E0218 01:24:52.422295 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:25:00 crc kubenswrapper[4858]: I0218 01:25:00.419344 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:25:00 crc kubenswrapper[4858]: E0218 01:25:00.420372 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:25:03 crc kubenswrapper[4858]: E0218 01:25:03.422861 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:25:05 crc kubenswrapper[4858]: E0218 01:25:05.422992 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:25:11 crc kubenswrapper[4858]: I0218 01:25:11.420106 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:25:11 crc kubenswrapper[4858]: E0218 01:25:11.421417 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:25:14 crc kubenswrapper[4858]: E0218 01:25:14.421718 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:25:16 crc kubenswrapper[4858]: E0218 01:25:16.423373 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:25:22 crc kubenswrapper[4858]: I0218 01:25:22.420638 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:25:22 crc kubenswrapper[4858]: E0218 01:25:22.421548 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:25:27 crc kubenswrapper[4858]: E0218 01:25:27.434375 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:25:27 crc kubenswrapper[4858]: E0218 01:25:27.434880 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:25:34 crc kubenswrapper[4858]: I0218 01:25:34.419877 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:25:34 crc kubenswrapper[4858]: E0218 01:25:34.420932 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:25:40 crc kubenswrapper[4858]: E0218 01:25:40.422615 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:25:41 crc kubenswrapper[4858]: E0218 01:25:41.423029 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:25:45 crc kubenswrapper[4858]: I0218 01:25:45.420888 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:25:45 crc kubenswrapper[4858]: E0218 01:25:45.422053 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:25:53 crc kubenswrapper[4858]: E0218 01:25:53.425715 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:25:56 crc kubenswrapper[4858]: E0218 01:25:56.421559 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:26:00 crc kubenswrapper[4858]: I0218 01:26:00.419609 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:26:00 crc kubenswrapper[4858]: E0218 01:26:00.420127 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:26:07 crc kubenswrapper[4858]: E0218 01:26:07.439035 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:26:10 crc kubenswrapper[4858]: E0218 01:26:10.423248 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:26:14 crc kubenswrapper[4858]: I0218 01:26:14.420292 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:26:14 crc kubenswrapper[4858]: E0218 01:26:14.421343 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:26:18 crc kubenswrapper[4858]: E0218 01:26:18.420988 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:26:21 crc kubenswrapper[4858]: E0218 01:26:21.423339 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:26:29 crc kubenswrapper[4858]: I0218 01:26:29.420585 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:26:29 crc kubenswrapper[4858]: E0218 01:26:29.421416 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:26:29 crc kubenswrapper[4858]: E0218 01:26:29.423624 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:26:33 crc kubenswrapper[4858]: E0218 01:26:33.424326 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:26:40 crc kubenswrapper[4858]: E0218 01:26:40.422727 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:26:44 crc kubenswrapper[4858]: I0218 01:26:44.420067 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:26:44 crc kubenswrapper[4858]: E0218 01:26:44.421048 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:26:47 crc kubenswrapper[4858]: E0218 01:26:47.442364 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:26:54 crc kubenswrapper[4858]: E0218 01:26:54.422369 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:26:58 crc kubenswrapper[4858]: I0218 01:26:58.419322 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:26:58 crc kubenswrapper[4858]: E0218 01:26:58.420801 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:26:59 crc kubenswrapper[4858]: E0218 01:26:59.422534 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:27:09 crc kubenswrapper[4858]: E0218 01:27:09.421755 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:27:12 crc kubenswrapper[4858]: I0218 01:27:12.420094 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:27:12 crc kubenswrapper[4858]: E0218 01:27:12.420598 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:27:12 crc kubenswrapper[4858]: E0218 01:27:12.422017 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.701147 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7djm"] Feb 18 01:27:16 crc kubenswrapper[4858]: E0218 01:27:16.702018 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerName="extract-content" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.702029 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerName="extract-content" Feb 18 01:27:16 crc kubenswrapper[4858]: E0218 01:27:16.702067 4858 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerName="extract-utilities" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.702073 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerName="extract-utilities" Feb 18 01:27:16 crc kubenswrapper[4858]: E0218 01:27:16.702094 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerName="registry-server" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.702100 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerName="registry-server" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.702300 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0796b11-5075-422e-aeeb-2e186e20dc86" containerName="registry-server" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.703780 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.720424 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7djm"] Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.794816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-catalog-content\") pod \"redhat-operators-d7djm\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.795123 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss296\" (UniqueName: \"kubernetes.io/projected/5d382d1e-b1d5-4fce-892c-733fc627687b-kube-api-access-ss296\") pod \"redhat-operators-d7djm\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.795209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-utilities\") pod \"redhat-operators-d7djm\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.896897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-catalog-content\") pod \"redhat-operators-d7djm\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.897049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss296\" (UniqueName: \"kubernetes.io/projected/5d382d1e-b1d5-4fce-892c-733fc627687b-kube-api-access-ss296\") pod \"redhat-operators-d7djm\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.897087 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-utilities\") pod \"redhat-operators-d7djm\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.897647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-utilities\") pod \"redhat-operators-d7djm\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.897771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-catalog-content\") pod \"redhat-operators-d7djm\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:16 crc kubenswrapper[4858]: I0218 01:27:16.920685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss296\" (UniqueName: \"kubernetes.io/projected/5d382d1e-b1d5-4fce-892c-733fc627687b-kube-api-access-ss296\") pod \"redhat-operators-d7djm\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:17 crc kubenswrapper[4858]: I0218 01:27:17.029198 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:17 crc kubenswrapper[4858]: I0218 01:27:17.542854 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7djm"] Feb 18 01:27:18 crc kubenswrapper[4858]: I0218 01:27:18.137132 4858 generic.go:334] "Generic (PLEG): container finished" podID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerID="eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c" exitCode=0 Feb 18 01:27:18 crc kubenswrapper[4858]: I0218 01:27:18.137181 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7djm" event={"ID":"5d382d1e-b1d5-4fce-892c-733fc627687b","Type":"ContainerDied","Data":"eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c"} Feb 18 01:27:18 crc kubenswrapper[4858]: I0218 01:27:18.138262 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7djm" event={"ID":"5d382d1e-b1d5-4fce-892c-733fc627687b","Type":"ContainerStarted","Data":"88014ff6cbb353393ae875e8eed1532dd603a62791b762b7c28a03e350a7c7df"} Feb 18 01:27:19 crc kubenswrapper[4858]: I0218 01:27:19.150009 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7djm" event={"ID":"5d382d1e-b1d5-4fce-892c-733fc627687b","Type":"ContainerStarted","Data":"5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d"} Feb 18 01:27:20 crc kubenswrapper[4858]: E0218 01:27:20.422064 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:27:23 crc kubenswrapper[4858]: E0218 01:27:23.420520 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:27:24 crc kubenswrapper[4858]: I0218 01:27:24.228095 4858 generic.go:334] "Generic (PLEG): container finished" podID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerID="5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d" exitCode=0 Feb 18 01:27:24 crc kubenswrapper[4858]: I0218 01:27:24.228159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7djm" event={"ID":"5d382d1e-b1d5-4fce-892c-733fc627687b","Type":"ContainerDied","Data":"5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d"} Feb 18 01:27:24 crc kubenswrapper[4858]: I0218 01:27:24.420123 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:27:24 crc kubenswrapper[4858]: E0218 01:27:24.420660 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:27:25 crc kubenswrapper[4858]: I0218 01:27:25.270572 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7djm" event={"ID":"5d382d1e-b1d5-4fce-892c-733fc627687b","Type":"ContainerStarted","Data":"f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac"} Feb 18 01:27:25 crc kubenswrapper[4858]: I0218 01:27:25.326053 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7djm" podStartSLOduration=2.672392083 podStartE2EDuration="9.326027115s" podCreationTimestamp="2026-02-18 01:27:16 +0000 UTC" firstStartedPulling="2026-02-18 01:27:18.140187102 +0000 UTC m=+3191.446023834" lastFinishedPulling="2026-02-18 01:27:24.793822124 +0000 UTC m=+3198.099658866" observedRunningTime="2026-02-18 01:27:25.291316008 +0000 UTC m=+3198.597152750" watchObservedRunningTime="2026-02-18 01:27:25.326027115 +0000 UTC m=+3198.631863857" Feb 18 01:27:27 crc kubenswrapper[4858]: I0218 01:27:27.029783 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:27 crc kubenswrapper[4858]: I0218 01:27:27.030103 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:28 crc kubenswrapper[4858]: I0218 01:27:28.082000 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7djm" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerName="registry-server" probeResult="failure" output=< Feb 18 01:27:28 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 01:27:28 crc kubenswrapper[4858]: > Feb 18 01:27:32 crc kubenswrapper[4858]: E0218 01:27:32.421590 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" 
podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:27:36 crc kubenswrapper[4858]: I0218 01:27:36.420165 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:27:36 crc kubenswrapper[4858]: E0218 01:27:36.420911 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:27:36 crc kubenswrapper[4858]: E0218 01:27:36.421811 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:27:37 crc kubenswrapper[4858]: I0218 01:27:37.129198 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:37 crc kubenswrapper[4858]: I0218 01:27:37.175513 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:37 crc kubenswrapper[4858]: I0218 01:27:37.366771 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7djm"] Feb 18 01:27:38 crc kubenswrapper[4858]: I0218 01:27:38.417387 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d7djm" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerName="registry-server" containerID="cri-o://f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac" gracePeriod=2 Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.021577 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.135934 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-catalog-content\") pod \"5d382d1e-b1d5-4fce-892c-733fc627687b\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.136066 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss296\" (UniqueName: \"kubernetes.io/projected/5d382d1e-b1d5-4fce-892c-733fc627687b-kube-api-access-ss296\") pod \"5d382d1e-b1d5-4fce-892c-733fc627687b\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.136188 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-utilities\") pod \"5d382d1e-b1d5-4fce-892c-733fc627687b\" (UID: \"5d382d1e-b1d5-4fce-892c-733fc627687b\") " Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.137425 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-utilities" (OuterVolumeSpecName: "utilities") pod "5d382d1e-b1d5-4fce-892c-733fc627687b" (UID: "5d382d1e-b1d5-4fce-892c-733fc627687b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.164274 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d382d1e-b1d5-4fce-892c-733fc627687b-kube-api-access-ss296" (OuterVolumeSpecName: "kube-api-access-ss296") pod "5d382d1e-b1d5-4fce-892c-733fc627687b" (UID: "5d382d1e-b1d5-4fce-892c-733fc627687b"). InnerVolumeSpecName "kube-api-access-ss296". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.239461 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.239645 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss296\" (UniqueName: \"kubernetes.io/projected/5d382d1e-b1d5-4fce-892c-733fc627687b-kube-api-access-ss296\") on node \"crc\" DevicePath \"\"" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.261135 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d382d1e-b1d5-4fce-892c-733fc627687b" (UID: "5d382d1e-b1d5-4fce-892c-733fc627687b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.342196 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d382d1e-b1d5-4fce-892c-733fc627687b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.429704 4858 generic.go:334] "Generic (PLEG): container finished" podID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerID="f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac" exitCode=0 Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.429801 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7djm" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.435169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7djm" event={"ID":"5d382d1e-b1d5-4fce-892c-733fc627687b","Type":"ContainerDied","Data":"f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac"} Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.435216 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7djm" event={"ID":"5d382d1e-b1d5-4fce-892c-733fc627687b","Type":"ContainerDied","Data":"88014ff6cbb353393ae875e8eed1532dd603a62791b762b7c28a03e350a7c7df"} Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.435236 4858 scope.go:117] "RemoveContainer" containerID="f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.466989 4858 scope.go:117] "RemoveContainer" containerID="5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.481860 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7djm"] Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.492188 4858 scope.go:117] "RemoveContainer" containerID="eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.493573 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d7djm"] Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.542447 4858 scope.go:117] "RemoveContainer" containerID="f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac" Feb 18 01:27:39 crc kubenswrapper[4858]: E0218 01:27:39.547150 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac\": container with ID starting with f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac not found: ID does not exist" containerID="f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.547194 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac"} err="failed to get container status \"f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac\": rpc error: code = NotFound desc = could not find container \"f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac\": container with ID starting with f663a9e1d491bcd9b664a5560e1f2063154f5822ecefe9c4860344ed3aee2dac not found: ID does not exist" Feb 18 01:27:39 crc 
kubenswrapper[4858]: I0218 01:27:39.547218 4858 scope.go:117] "RemoveContainer" containerID="5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d" Feb 18 01:27:39 crc kubenswrapper[4858]: E0218 01:27:39.547592 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d\": container with ID starting with 5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d not found: ID does not exist" containerID="5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.547631 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d"} err="failed to get container status \"5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d\": rpc error: code = NotFound desc = could not find container \"5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d\": container with ID starting with 5e517e19ab8314474a5afe787df75dfe0a4f138185545919be5d8b820caba94d not found: ID does not exist" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.547661 4858 scope.go:117] "RemoveContainer" containerID="eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c" Feb 18 01:27:39 crc kubenswrapper[4858]: E0218 01:27:39.548116 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c\": container with ID starting with eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c not found: ID does not exist" containerID="eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c" Feb 18 01:27:39 crc kubenswrapper[4858]: I0218 01:27:39.548139 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c"} err="failed to get container status \"eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c\": rpc error: code = NotFound desc = could not find container \"eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c\": container with ID starting with eb4153716bba9559e7b269df8e5a639eb8252a8eb6ff53292a7f38d7ce60178c not found: ID does not exist" Feb 18 01:27:41 crc kubenswrapper[4858]: I0218 01:27:41.433191 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" path="/var/lib/kubelet/pods/5d382d1e-b1d5-4fce-892c-733fc627687b/volumes" Feb 18 01:27:45 crc kubenswrapper[4858]: E0218 01:27:45.421556 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:27:47 crc kubenswrapper[4858]: I0218 01:27:47.428185 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:27:47 crc kubenswrapper[4858]: E0218 01:27:47.429615 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:27:48 crc kubenswrapper[4858]: E0218 01:27:48.423041 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:27:59 crc kubenswrapper[4858]: E0218 01:27:59.422754 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:28:00 crc kubenswrapper[4858]: I0218 01:28:00.419464 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:28:01 crc kubenswrapper[4858]: I0218 01:28:01.665995 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"e391c93a1a146aadedb1ac36b6c2e49907bc1ff8030357e7dec22026586763db"} Feb 18 01:28:03 crc kubenswrapper[4858]: E0218 01:28:03.422572 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:28:13 crc kubenswrapper[4858]: I0218 01:28:13.423545 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:28:13 crc kubenswrapper[4858]: E0218 01:28:13.537541 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:28:13 crc kubenswrapper[4858]: E0218 01:28:13.537628 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:28:13 crc kubenswrapper[4858]: E0218 01:28:13.537830 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:28:13 crc kubenswrapper[4858]: E0218 01:28:13.539070 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:28:15 crc kubenswrapper[4858]: E0218 01:28:15.421865 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:28:21 crc kubenswrapper[4858]: I0218 01:28:21.891226 4858 generic.go:334] "Generic (PLEG): container finished" podID="b76d04a7-6eb2-4a9a-8934-ff0cea670d77" containerID="27e3d2d79f3812eced35fa3df57a528c5ed7cebec425d416103e7196a77dd912" exitCode=2 Feb 18 01:28:21 crc kubenswrapper[4858]: I0218 01:28:21.891309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" event={"ID":"b76d04a7-6eb2-4a9a-8934-ff0cea670d77","Type":"ContainerDied","Data":"27e3d2d79f3812eced35fa3df57a528c5ed7cebec425d416103e7196a77dd912"} Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.389599 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.529946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-inventory\") pod \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.530056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-ssh-key-openstack-edpm-ipam\") pod \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.530162 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptwgf\" (UniqueName: \"kubernetes.io/projected/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-kube-api-access-ptwgf\") pod \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\" (UID: \"b76d04a7-6eb2-4a9a-8934-ff0cea670d77\") " Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.575077 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-kube-api-access-ptwgf" (OuterVolumeSpecName: "kube-api-access-ptwgf") pod "b76d04a7-6eb2-4a9a-8934-ff0cea670d77" (UID: "b76d04a7-6eb2-4a9a-8934-ff0cea670d77"). InnerVolumeSpecName "kube-api-access-ptwgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.630632 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b76d04a7-6eb2-4a9a-8934-ff0cea670d77" (UID: "b76d04a7-6eb2-4a9a-8934-ff0cea670d77"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.645679 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.645709 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptwgf\" (UniqueName: \"kubernetes.io/projected/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-kube-api-access-ptwgf\") on node \"crc\" DevicePath \"\"" Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.648347 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-inventory" (OuterVolumeSpecName: "inventory") pod "b76d04a7-6eb2-4a9a-8934-ff0cea670d77" (UID: "b76d04a7-6eb2-4a9a-8934-ff0cea670d77"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.747539 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b76d04a7-6eb2-4a9a-8934-ff0cea670d77-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.910373 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" event={"ID":"b76d04a7-6eb2-4a9a-8934-ff0cea670d77","Type":"ContainerDied","Data":"86e71319361e7f7e4d5aaaa0059fb483f4b0f5c92fcada4c2da7c50321f5aa65"} Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.910403 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq" Feb 18 01:28:23 crc kubenswrapper[4858]: I0218 01:28:23.910417 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86e71319361e7f7e4d5aaaa0059fb483f4b0f5c92fcada4c2da7c50321f5aa65" Feb 18 01:28:24 crc kubenswrapper[4858]: E0218 01:28:24.423764 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:28:28 crc kubenswrapper[4858]: E0218 01:28:28.545557 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:28:28 crc kubenswrapper[4858]: E0218 01:28:28.546097 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:28:28 crc kubenswrapper[4858]: E0218 01:28:28.546229 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:28:28 crc kubenswrapper[4858]: E0218 01:28:28.547426 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:28:38 crc kubenswrapper[4858]: E0218 01:28:38.422710 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:28:42 crc kubenswrapper[4858]: E0218 01:28:42.423464 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:28:49 crc kubenswrapper[4858]: E0218 01:28:49.423002 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:28:55 crc kubenswrapper[4858]: E0218 01:28:55.421550 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:29:04 crc kubenswrapper[4858]: E0218 01:29:04.422313 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:29:06 crc kubenswrapper[4858]: E0218 01:29:06.422354 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:29:18 crc kubenswrapper[4858]: E0218 01:29:18.423633 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:29:19 crc kubenswrapper[4858]: E0218 01:29:19.422830 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:29:30 crc kubenswrapper[4858]: E0218 01:29:30.422106 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:29:32 crc kubenswrapper[4858]: E0218 01:29:32.422332 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.054953 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj"] Feb 18 01:29:41 crc kubenswrapper[4858]: E0218 01:29:41.056475 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerName="extract-utilities" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.056519 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerName="extract-utilities" Feb 18 01:29:41 crc kubenswrapper[4858]: E0218 01:29:41.056571 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerName="extract-content" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.056584 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerName="extract-content" Feb 18 01:29:41 crc kubenswrapper[4858]: E0218 01:29:41.056610 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76d04a7-6eb2-4a9a-8934-ff0cea670d77" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.056621 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76d04a7-6eb2-4a9a-8934-ff0cea670d77" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:29:41 crc kubenswrapper[4858]: E0218 01:29:41.056642 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerName="registry-server" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.056651 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerName="registry-server" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.057009 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76d04a7-6eb2-4a9a-8934-ff0cea670d77" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.057043 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d382d1e-b1d5-4fce-892c-733fc627687b" containerName="registry-server" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.058406 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.061159 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.061407 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.061725 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.061877 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.073375 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj"] Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.076523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v55bj\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.076744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97xr\" (UniqueName: \"kubernetes.io/projected/bcd6a468-3c13-4a07-af88-b78f12b9de4f-kube-api-access-t97xr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v55bj\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.076811 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v55bj\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.178060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t97xr\" (UniqueName: \"kubernetes.io/projected/bcd6a468-3c13-4a07-af88-b78f12b9de4f-kube-api-access-t97xr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v55bj\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.178125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v55bj\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.178172 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v55bj\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.183731 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v55bj\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.188542 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v55bj\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.195995 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97xr\" (UniqueName: \"kubernetes.io/projected/bcd6a468-3c13-4a07-af88-b78f12b9de4f-kube-api-access-t97xr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v55bj\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:41 crc kubenswrapper[4858]: I0218 01:29:41.407082 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:29:42 crc kubenswrapper[4858]: I0218 01:29:42.043389 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj"] Feb 18 01:29:42 crc kubenswrapper[4858]: W0218 01:29:42.058964 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcd6a468_3c13_4a07_af88_b78f12b9de4f.slice/crio-7f6cc97744d1c159cf30c555ee8673b93a39213183d2dc1ac5ca2d90f849bc23 WatchSource:0}: Error finding container 7f6cc97744d1c159cf30c555ee8673b93a39213183d2dc1ac5ca2d90f849bc23: Status 404 returned error can't find the container with id 7f6cc97744d1c159cf30c555ee8673b93a39213183d2dc1ac5ca2d90f849bc23 Feb 18 01:29:42 crc kubenswrapper[4858]: E0218 01:29:42.421580 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:29:42 crc kubenswrapper[4858]: I0218 01:29:42.780169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" event={"ID":"bcd6a468-3c13-4a07-af88-b78f12b9de4f","Type":"ContainerStarted","Data":"7f6cc97744d1c159cf30c555ee8673b93a39213183d2dc1ac5ca2d90f849bc23"} Feb 18 01:29:43 crc kubenswrapper[4858]: I0218 01:29:43.796961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" 
event={"ID":"bcd6a468-3c13-4a07-af88-b78f12b9de4f","Type":"ContainerStarted","Data":"73b882c94b04d6d992490f29a6814eccb6d6809ff187450de4fb74e76aa0554b"} Feb 18 01:29:43 crc kubenswrapper[4858]: I0218 01:29:43.840422 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" podStartSLOduration=2.323562707 podStartE2EDuration="2.840395288s" podCreationTimestamp="2026-02-18 01:29:41 +0000 UTC" firstStartedPulling="2026-02-18 01:29:42.061994965 +0000 UTC m=+3335.367831737" lastFinishedPulling="2026-02-18 01:29:42.578827566 +0000 UTC m=+3335.884664318" observedRunningTime="2026-02-18 01:29:43.816730168 +0000 UTC m=+3337.122566950" watchObservedRunningTime="2026-02-18 01:29:43.840395288 +0000 UTC m=+3337.146232060" Feb 18 01:29:44 crc kubenswrapper[4858]: E0218 01:29:44.422754 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:29:55 crc kubenswrapper[4858]: E0218 01:29:55.424180 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:29:55 crc kubenswrapper[4858]: E0218 01:29:55.424260 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.150284 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj"] Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.152815 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.155036 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.155424 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.161617 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj"] Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.294125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ztqq\" (UniqueName: \"kubernetes.io/projected/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-kube-api-access-7ztqq\") pod \"collect-profiles-29522970-q9dbj\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.294340 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-secret-volume\") pod \"collect-profiles-29522970-q9dbj\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.294459 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-config-volume\") pod \"collect-profiles-29522970-q9dbj\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.396390 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-secret-volume\") pod \"collect-profiles-29522970-q9dbj\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.396461 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-config-volume\") pod \"collect-profiles-29522970-q9dbj\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.396777 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ztqq\" (UniqueName: \"kubernetes.io/projected/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-kube-api-access-7ztqq\") pod \"collect-profiles-29522970-q9dbj\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.398533 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-config-volume\") pod 
\"collect-profiles-29522970-q9dbj\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.407324 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-secret-volume\") pod \"collect-profiles-29522970-q9dbj\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.414168 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ztqq\" (UniqueName: \"kubernetes.io/projected/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-kube-api-access-7ztqq\") pod \"collect-profiles-29522970-q9dbj\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:00 crc kubenswrapper[4858]: I0218 01:30:00.487058 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:01 crc kubenswrapper[4858]: I0218 01:30:01.021360 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj"] Feb 18 01:30:02 crc kubenswrapper[4858]: I0218 01:30:02.030387 4858 generic.go:334] "Generic (PLEG): container finished" podID="8fa41ff3-5e85-4832-81c2-cc5b49d895f4" containerID="92fc38235d422e75889d9ed453d61838ed6c3c124cc55dc1674f1f503c05cba6" exitCode=0 Feb 18 01:30:02 crc kubenswrapper[4858]: I0218 01:30:02.030518 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" event={"ID":"8fa41ff3-5e85-4832-81c2-cc5b49d895f4","Type":"ContainerDied","Data":"92fc38235d422e75889d9ed453d61838ed6c3c124cc55dc1674f1f503c05cba6"} Feb 18 01:30:02 crc kubenswrapper[4858]: I0218 01:30:02.031099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" event={"ID":"8fa41ff3-5e85-4832-81c2-cc5b49d895f4","Type":"ContainerStarted","Data":"461a686c70bc0ae085ae8fbae6e0ebc726be9260d65e1cf02c2361d336cadd6f"} Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.539482 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.662681 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ztqq\" (UniqueName: \"kubernetes.io/projected/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-kube-api-access-7ztqq\") pod \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.662871 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-config-volume\") pod \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.663030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-secret-volume\") pod \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\" (UID: \"8fa41ff3-5e85-4832-81c2-cc5b49d895f4\") " Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.663470 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-config-volume" (OuterVolumeSpecName: "config-volume") pod "8fa41ff3-5e85-4832-81c2-cc5b49d895f4" (UID: "8fa41ff3-5e85-4832-81c2-cc5b49d895f4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.663665 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.671050 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8fa41ff3-5e85-4832-81c2-cc5b49d895f4" (UID: "8fa41ff3-5e85-4832-81c2-cc5b49d895f4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.673648 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-kube-api-access-7ztqq" (OuterVolumeSpecName: "kube-api-access-7ztqq") pod "8fa41ff3-5e85-4832-81c2-cc5b49d895f4" (UID: "8fa41ff3-5e85-4832-81c2-cc5b49d895f4"). InnerVolumeSpecName "kube-api-access-7ztqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.765553 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ztqq\" (UniqueName: \"kubernetes.io/projected/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-kube-api-access-7ztqq\") on node \"crc\" DevicePath \"\"" Feb 18 01:30:03 crc kubenswrapper[4858]: I0218 01:30:03.765588 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fa41ff3-5e85-4832-81c2-cc5b49d895f4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:30:04 crc kubenswrapper[4858]: I0218 01:30:04.076435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" event={"ID":"8fa41ff3-5e85-4832-81c2-cc5b49d895f4","Type":"ContainerDied","Data":"461a686c70bc0ae085ae8fbae6e0ebc726be9260d65e1cf02c2361d336cadd6f"} Feb 18 01:30:04 crc kubenswrapper[4858]: I0218 01:30:04.076485 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="461a686c70bc0ae085ae8fbae6e0ebc726be9260d65e1cf02c2361d336cadd6f" Feb 18 01:30:04 crc kubenswrapper[4858]: I0218 01:30:04.076608 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-q9dbj" Feb 18 01:30:04 crc kubenswrapper[4858]: I0218 01:30:04.648733 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2"] Feb 18 01:30:04 crc kubenswrapper[4858]: I0218 01:30:04.662633 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-spxq2"] Feb 18 01:30:05 crc kubenswrapper[4858]: I0218 01:30:05.436901 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3216e4c5-ff7a-45e4-9064-dd234a355dfb" path="/var/lib/kubelet/pods/3216e4c5-ff7a-45e4-9064-dd234a355dfb/volumes" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.147708 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rxg2j"] Feb 18 01:30:06 crc kubenswrapper[4858]: E0218 01:30:06.148553 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa41ff3-5e85-4832-81c2-cc5b49d895f4" containerName="collect-profiles" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.148585 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa41ff3-5e85-4832-81c2-cc5b49d895f4" containerName="collect-profiles" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.150682 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa41ff3-5e85-4832-81c2-cc5b49d895f4" containerName="collect-profiles" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.153872 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.185965 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxg2j"] Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.299262 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-catalog-content\") pod \"certified-operators-rxg2j\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.299591 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-utilities\") pod \"certified-operators-rxg2j\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.299752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgf5d\" (UniqueName: \"kubernetes.io/projected/45421c0b-fd5f-4652-97a7-e384cb3a2217-kube-api-access-wgf5d\") pod \"certified-operators-rxg2j\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.401542 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-catalog-content\") pod \"certified-operators-rxg2j\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.401636 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-utilities\") pod \"certified-operators-rxg2j\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.401693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgf5d\" (UniqueName: \"kubernetes.io/projected/45421c0b-fd5f-4652-97a7-e384cb3a2217-kube-api-access-wgf5d\") pod \"certified-operators-rxg2j\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.402191 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-catalog-content\") pod \"certified-operators-rxg2j\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.402201 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-utilities\") pod \"certified-operators-rxg2j\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.423110 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wgf5d\" (UniqueName: \"kubernetes.io/projected/45421c0b-fd5f-4652-97a7-e384cb3a2217-kube-api-access-wgf5d\") pod \"certified-operators-rxg2j\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:06 crc kubenswrapper[4858]: I0218 01:30:06.482714 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:07 crc kubenswrapper[4858]: I0218 01:30:07.037971 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxg2j"] Feb 18 01:30:07 crc kubenswrapper[4858]: I0218 01:30:07.105695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxg2j" event={"ID":"45421c0b-fd5f-4652-97a7-e384cb3a2217","Type":"ContainerStarted","Data":"a9878a8c692f24bc45677bea69197b38bef00dec8b66acee765e14182b55a977"} Feb 18 01:30:07 crc kubenswrapper[4858]: E0218 01:30:07.431785 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:30:08 crc kubenswrapper[4858]: I0218 01:30:08.113245 4858 generic.go:334] "Generic (PLEG): container finished" podID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerID="79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8" exitCode=0 Feb 18 01:30:08 crc kubenswrapper[4858]: I0218 01:30:08.113292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxg2j" event={"ID":"45421c0b-fd5f-4652-97a7-e384cb3a2217","Type":"ContainerDied","Data":"79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8"} Feb 18 01:30:09 crc kubenswrapper[4858]: I0218 01:30:09.134823 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxg2j" event={"ID":"45421c0b-fd5f-4652-97a7-e384cb3a2217","Type":"ContainerStarted","Data":"358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee"} Feb 18 01:30:10 crc kubenswrapper[4858]: I0218 01:30:10.149151 4858 generic.go:334] "Generic (PLEG): container finished" podID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerID="358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee" exitCode=0 Feb 18 01:30:10 crc kubenswrapper[4858]: I0218 01:30:10.149242 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxg2j" event={"ID":"45421c0b-fd5f-4652-97a7-e384cb3a2217","Type":"ContainerDied","Data":"358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee"} Feb 18 01:30:10 crc kubenswrapper[4858]: E0218 01:30:10.421695 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:30:11 crc kubenswrapper[4858]: I0218 01:30:11.180898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxg2j" 
event={"ID":"45421c0b-fd5f-4652-97a7-e384cb3a2217","Type":"ContainerStarted","Data":"97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa"} Feb 18 01:30:11 crc kubenswrapper[4858]: I0218 01:30:11.208491 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rxg2j" podStartSLOduration=2.746097777 podStartE2EDuration="5.208472108s" podCreationTimestamp="2026-02-18 01:30:06 +0000 UTC" firstStartedPulling="2026-02-18 01:30:08.114771488 +0000 UTC m=+3361.420608220" lastFinishedPulling="2026-02-18 01:30:10.577145809 +0000 UTC m=+3363.882982551" observedRunningTime="2026-02-18 01:30:11.200063026 +0000 UTC m=+3364.505899768" watchObservedRunningTime="2026-02-18 01:30:11.208472108 +0000 UTC m=+3364.514308850" Feb 18 01:30:16 crc kubenswrapper[4858]: I0218 01:30:16.483055 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:16 crc kubenswrapper[4858]: I0218 01:30:16.483743 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:16 crc kubenswrapper[4858]: I0218 01:30:16.536595 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:17 crc kubenswrapper[4858]: I0218 01:30:17.342947 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:17 crc kubenswrapper[4858]: I0218 01:30:17.443783 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxg2j"] Feb 18 01:30:19 crc kubenswrapper[4858]: I0218 01:30:19.292167 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rxg2j" podUID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerName="registry-server" containerID="cri-o://97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa" gracePeriod=2 Feb 18 01:30:19 crc kubenswrapper[4858]: E0218 01:30:19.422335 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:30:19 crc kubenswrapper[4858]: I0218 01:30:19.953410 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.065628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-catalog-content\") pod \"45421c0b-fd5f-4652-97a7-e384cb3a2217\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.065770 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-utilities\") pod \"45421c0b-fd5f-4652-97a7-e384cb3a2217\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.065908 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgf5d\" (UniqueName: \"kubernetes.io/projected/45421c0b-fd5f-4652-97a7-e384cb3a2217-kube-api-access-wgf5d\") pod \"45421c0b-fd5f-4652-97a7-e384cb3a2217\" (UID: \"45421c0b-fd5f-4652-97a7-e384cb3a2217\") " Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.067354 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-utilities" (OuterVolumeSpecName: "utilities") pod "45421c0b-fd5f-4652-97a7-e384cb3a2217" (UID: "45421c0b-fd5f-4652-97a7-e384cb3a2217"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.073952 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45421c0b-fd5f-4652-97a7-e384cb3a2217-kube-api-access-wgf5d" (OuterVolumeSpecName: "kube-api-access-wgf5d") pod "45421c0b-fd5f-4652-97a7-e384cb3a2217" (UID: "45421c0b-fd5f-4652-97a7-e384cb3a2217"). InnerVolumeSpecName "kube-api-access-wgf5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.128270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45421c0b-fd5f-4652-97a7-e384cb3a2217" (UID: "45421c0b-fd5f-4652-97a7-e384cb3a2217"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.169251 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.169292 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgf5d\" (UniqueName: \"kubernetes.io/projected/45421c0b-fd5f-4652-97a7-e384cb3a2217-kube-api-access-wgf5d\") on node \"crc\" DevicePath \"\"" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.169307 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45421c0b-fd5f-4652-97a7-e384cb3a2217-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.303603 4858 generic.go:334] "Generic (PLEG): container finished" podID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerID="97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa" exitCode=0 Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.303666 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxg2j" event={"ID":"45421c0b-fd5f-4652-97a7-e384cb3a2217","Type":"ContainerDied","Data":"97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa"} Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.303722 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxg2j" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.303751 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxg2j" event={"ID":"45421c0b-fd5f-4652-97a7-e384cb3a2217","Type":"ContainerDied","Data":"a9878a8c692f24bc45677bea69197b38bef00dec8b66acee765e14182b55a977"} Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.303777 4858 scope.go:117] "RemoveContainer" containerID="97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.327635 4858 scope.go:117] "RemoveContainer" containerID="358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.352027 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxg2j"] Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.362966 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rxg2j"] Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.364644 4858 scope.go:117] "RemoveContainer" containerID="79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.405747 4858 scope.go:117] "RemoveContainer" containerID="97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa" Feb 18 01:30:20 crc kubenswrapper[4858]: E0218 01:30:20.406297 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa\": container with ID starting with 97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa not found: ID does not exist" containerID="97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.406346 
4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa"} err="failed to get container status \"97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa\": rpc error: code = NotFound desc = could not find container \"97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa\": container with ID starting with 97608c317e393d58130757ebcf27fb24335a5d12f188d8e2dcdf675479708faa not found: ID does not exist" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.406376 4858 scope.go:117] "RemoveContainer" containerID="358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee" Feb 18 01:30:20 crc kubenswrapper[4858]: E0218 01:30:20.406969 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee\": container with ID starting with 358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee not found: ID does not exist" containerID="358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.407241 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee"} err="failed to get container status \"358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee\": rpc error: code = NotFound desc = could not find container \"358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee\": container with ID starting with 358463a59095714137af37b5db2da707650cd07b7931a5c05b3334d1586b59ee not found: ID does not exist" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.407407 4858 scope.go:117] "RemoveContainer" containerID="79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8" Feb 18 01:30:20 crc kubenswrapper[4858]: E0218 01:30:20.408116 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8\": container with ID starting with 79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8 not found: ID does not exist" containerID="79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8" Feb 18 01:30:20 crc kubenswrapper[4858]: I0218 01:30:20.408152 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8"} err="failed to get container status \"79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8\": rpc error: code = NotFound desc = could not find container \"79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8\": container with ID starting with 79bb863b88fec4758fde424282641eed99d29e136c024f16d88937d489311ce8 not found: ID does not exist" Feb 18 01:30:21 crc kubenswrapper[4858]: I0218 01:30:21.436038 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45421c0b-fd5f-4652-97a7-e384cb3a2217" path="/var/lib/kubelet/pods/45421c0b-fd5f-4652-97a7-e384cb3a2217/volumes" Feb 18 01:30:24 crc kubenswrapper[4858]: E0218 01:30:24.423095 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:30:25 crc kubenswrapper[4858]: I0218 01:30:25.265403 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:30:25 crc kubenswrapper[4858]: I0218 01:30:25.265500 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:30:31 crc kubenswrapper[4858]: I0218 01:30:31.980066 4858 scope.go:117] "RemoveContainer" containerID="9393af52b93b741680066896084ffc0ce4c793f8f694265cb9ebb37ca506d732" Feb 18 01:30:32 crc kubenswrapper[4858]: E0218 01:30:32.421001 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:30:35 crc kubenswrapper[4858]: E0218 01:30:35.422444 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:30:45 crc kubenswrapper[4858]: E0218 01:30:45.422315 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:30:47 crc kubenswrapper[4858]: E0218 01:30:47.436209 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:30:55 crc kubenswrapper[4858]: I0218 01:30:55.264882 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:30:55 crc kubenswrapper[4858]: I0218 01:30:55.265434 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:30:59 crc kubenswrapper[4858]: E0218 01:30:59.424021 
4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:31:02 crc kubenswrapper[4858]: E0218 01:31:02.422997 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:31:12 crc kubenswrapper[4858]: E0218 01:31:12.422074 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:31:17 crc kubenswrapper[4858]: E0218 01:31:17.429049 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:31:25 crc kubenswrapper[4858]: I0218 01:31:25.265648 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:31:25 crc kubenswrapper[4858]: I0218 01:31:25.267787 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:31:25 crc kubenswrapper[4858]: I0218 01:31:25.268028 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:31:25 crc kubenswrapper[4858]: I0218 01:31:25.269233 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e391c93a1a146aadedb1ac36b6c2e49907bc1ff8030357e7dec22026586763db"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:31:25 crc kubenswrapper[4858]: I0218 01:31:25.269499 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://e391c93a1a146aadedb1ac36b6c2e49907bc1ff8030357e7dec22026586763db" gracePeriod=600 Feb 18 01:31:26 crc kubenswrapper[4858]: I0218 01:31:26.035012 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" 
containerID="e391c93a1a146aadedb1ac36b6c2e49907bc1ff8030357e7dec22026586763db" exitCode=0 Feb 18 01:31:26 crc kubenswrapper[4858]: I0218 01:31:26.035127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"e391c93a1a146aadedb1ac36b6c2e49907bc1ff8030357e7dec22026586763db"} Feb 18 01:31:26 crc kubenswrapper[4858]: I0218 01:31:26.035505 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7"} Feb 18 01:31:26 crc kubenswrapper[4858]: I0218 01:31:26.035549 4858 scope.go:117] "RemoveContainer" containerID="e5af647c6e3b64c4b8990c3db4013e6cfcee7c4c7aeeb637553403e83643b664" Feb 18 01:31:26 crc kubenswrapper[4858]: E0218 01:31:26.420294 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:31:29 crc kubenswrapper[4858]: E0218 01:31:29.427905 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:31:39 crc kubenswrapper[4858]: E0218 01:31:39.422000 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:31:42 crc kubenswrapper[4858]: E0218 01:31:42.421675 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:31:51 crc kubenswrapper[4858]: E0218 01:31:51.422390 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:31:53 crc kubenswrapper[4858]: E0218 01:31:53.421645 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:32:02 crc kubenswrapper[4858]: E0218 01:32:02.423701 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.585751 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-87mlp"] Feb 18 01:32:03 crc kubenswrapper[4858]: E0218 01:32:03.587025 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerName="extract-utilities" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.587060 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerName="extract-utilities" Feb 18 01:32:03 crc kubenswrapper[4858]: E0218 01:32:03.587101 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerName="registry-server" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.587119 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerName="registry-server" Feb 18 01:32:03 crc kubenswrapper[4858]: E0218 01:32:03.587187 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerName="extract-content" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.587204 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerName="extract-content" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.587715 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="45421c0b-fd5f-4652-97a7-e384cb3a2217" containerName="registry-server" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.590662 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.601860 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-87mlp"] Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.644239 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4xl\" (UniqueName: \"kubernetes.io/projected/2e5292a8-45d1-4dfd-8054-7372cdd77b09-kube-api-access-8p4xl\") pod \"community-operators-87mlp\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.644315 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-utilities\") pod \"community-operators-87mlp\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.644361 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-catalog-content\") pod \"community-operators-87mlp\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.746093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-catalog-content\") pod \"community-operators-87mlp\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.746343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4xl\" (UniqueName: \"kubernetes.io/projected/2e5292a8-45d1-4dfd-8054-7372cdd77b09-kube-api-access-8p4xl\") pod \"community-operators-87mlp\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.746409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-utilities\") pod \"community-operators-87mlp\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.747033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-utilities\") pod \"community-operators-87mlp\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.747452 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-catalog-content\") pod \"community-operators-87mlp\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.773427 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8p4xl\" (UniqueName: \"kubernetes.io/projected/2e5292a8-45d1-4dfd-8054-7372cdd77b09-kube-api-access-8p4xl\") pod \"community-operators-87mlp\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:03 crc kubenswrapper[4858]: I0218 01:32:03.923140 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:04 crc kubenswrapper[4858]: I0218 01:32:04.512910 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-87mlp"] Feb 18 01:32:04 crc kubenswrapper[4858]: W0218 01:32:04.516279 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e5292a8_45d1_4dfd_8054_7372cdd77b09.slice/crio-fa5e2a35c76c593e66e402bd2e7e42f8031f92d100c421e8ca818c6da9bf8c2d WatchSource:0}: Error finding container fa5e2a35c76c593e66e402bd2e7e42f8031f92d100c421e8ca818c6da9bf8c2d: Status 404 returned error can't find the container with id fa5e2a35c76c593e66e402bd2e7e42f8031f92d100c421e8ca818c6da9bf8c2d Feb 18 01:32:05 crc kubenswrapper[4858]: I0218 01:32:05.511884 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerID="85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7" exitCode=0 Feb 18 01:32:05 crc kubenswrapper[4858]: I0218 01:32:05.524076 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mlp" event={"ID":"2e5292a8-45d1-4dfd-8054-7372cdd77b09","Type":"ContainerDied","Data":"85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7"} Feb 18 01:32:05 crc kubenswrapper[4858]: I0218 01:32:05.524129 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mlp" event={"ID":"2e5292a8-45d1-4dfd-8054-7372cdd77b09","Type":"ContainerStarted","Data":"fa5e2a35c76c593e66e402bd2e7e42f8031f92d100c421e8ca818c6da9bf8c2d"} Feb 18 01:32:06 crc kubenswrapper[4858]: E0218 01:32:06.420156 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:32:06 crc kubenswrapper[4858]: I0218 01:32:06.530292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mlp" event={"ID":"2e5292a8-45d1-4dfd-8054-7372cdd77b09","Type":"ContainerStarted","Data":"bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc"} Feb 18 01:32:08 crc kubenswrapper[4858]: I0218 01:32:08.552719 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerID="bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc" exitCode=0 Feb 18 01:32:08 crc kubenswrapper[4858]: I0218 01:32:08.552812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mlp" event={"ID":"2e5292a8-45d1-4dfd-8054-7372cdd77b09","Type":"ContainerDied","Data":"bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc"} Feb 18 01:32:09 crc kubenswrapper[4858]: I0218 01:32:09.565552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-87mlp" event={"ID":"2e5292a8-45d1-4dfd-8054-7372cdd77b09","Type":"ContainerStarted","Data":"4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b"} Feb 18 01:32:09 crc kubenswrapper[4858]: I0218 01:32:09.597938 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-87mlp" podStartSLOduration=3.185376811 podStartE2EDuration="6.597919806s" podCreationTimestamp="2026-02-18 01:32:03 +0000 UTC" firstStartedPulling="2026-02-18 01:32:05.515096371 +0000 UTC m=+3478.820933093" lastFinishedPulling="2026-02-18 01:32:08.927639356 +0000 UTC m=+3482.233476088" observedRunningTime="2026-02-18 01:32:09.591730008 +0000 UTC m=+3482.897566770" watchObservedRunningTime="2026-02-18 01:32:09.597919806 +0000 UTC m=+3482.903756538" Feb 18 01:32:13 crc kubenswrapper[4858]: I0218 01:32:13.923966 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:13 crc kubenswrapper[4858]: I0218 01:32:13.924857 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:14 crc kubenswrapper[4858]: I0218 01:32:14.016203 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:14 crc kubenswrapper[4858]: I0218 01:32:14.694911 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:14 crc kubenswrapper[4858]: I0218 01:32:14.750869 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-87mlp"] Feb 18 01:32:16 crc kubenswrapper[4858]: I0218 01:32:16.651997 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-87mlp" podUID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerName="registry-server" containerID="cri-o://4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b" gracePeriod=2 Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.292035 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:17 crc kubenswrapper[4858]: E0218 01:32:17.430186 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.493678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-catalog-content\") pod \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.493777 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p4xl\" (UniqueName: \"kubernetes.io/projected/2e5292a8-45d1-4dfd-8054-7372cdd77b09-kube-api-access-8p4xl\") pod \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.493894 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-utilities\") pod \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\" (UID: \"2e5292a8-45d1-4dfd-8054-7372cdd77b09\") " Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.495291 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-utilities" (OuterVolumeSpecName: "utilities") pod "2e5292a8-45d1-4dfd-8054-7372cdd77b09" (UID: "2e5292a8-45d1-4dfd-8054-7372cdd77b09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.509918 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5292a8-45d1-4dfd-8054-7372cdd77b09-kube-api-access-8p4xl" (OuterVolumeSpecName: "kube-api-access-8p4xl") pod "2e5292a8-45d1-4dfd-8054-7372cdd77b09" (UID: "2e5292a8-45d1-4dfd-8054-7372cdd77b09"). InnerVolumeSpecName "kube-api-access-8p4xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.544590 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e5292a8-45d1-4dfd-8054-7372cdd77b09" (UID: "2e5292a8-45d1-4dfd-8054-7372cdd77b09"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.597353 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.597383 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p4xl\" (UniqueName: \"kubernetes.io/projected/2e5292a8-45d1-4dfd-8054-7372cdd77b09-kube-api-access-8p4xl\") on node \"crc\" DevicePath \"\"" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.597392 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5292a8-45d1-4dfd-8054-7372cdd77b09-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.665454 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerID="4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b" exitCode=0 Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.665519 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mlp" event={"ID":"2e5292a8-45d1-4dfd-8054-7372cdd77b09","Type":"ContainerDied","Data":"4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b"} Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.665555 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mlp" event={"ID":"2e5292a8-45d1-4dfd-8054-7372cdd77b09","Type":"ContainerDied","Data":"fa5e2a35c76c593e66e402bd2e7e42f8031f92d100c421e8ca818c6da9bf8c2d"} Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.665572 4858 scope.go:117] "RemoveContainer" containerID="4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.665604 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-87mlp" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.695373 4858 scope.go:117] "RemoveContainer" containerID="bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.732051 4858 scope.go:117] "RemoveContainer" containerID="85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.744825 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-87mlp"] Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.756095 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-87mlp"] Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.781573 4858 scope.go:117] "RemoveContainer" containerID="4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b" Feb 18 01:32:17 crc kubenswrapper[4858]: E0218 01:32:17.782099 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b\": container with ID starting with 4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b not found: ID does not exist" containerID="4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.782135 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b"} err="failed to get container status \"4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b\": rpc error: code = NotFound desc = could not find container \"4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b\": container with ID starting with 4b400a9bb81e2ef861fa1893c10ae3cd11857ac0a9ba06d4038972bd291e121b not found: ID does not exist" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.782163 4858 scope.go:117] "RemoveContainer" containerID="bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc" Feb 18 01:32:17 crc kubenswrapper[4858]: E0218 01:32:17.782654 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc\": container with ID starting with bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc not found: ID does not exist" containerID="bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.782705 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc"} err="failed to get container status \"bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc\": rpc error: code = NotFound desc = could not find container \"bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc\": container with ID starting with bb0153f6e91af8847e49cb555ee4b608e6f6ab83c1843a9007bd4fa258e221fc not found: ID does not exist" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.782740 4858 scope.go:117] "RemoveContainer" containerID="85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7" Feb 18 01:32:17 crc kubenswrapper[4858]: E0218 01:32:17.783059 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7\": container with ID starting with 85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7 not found: ID does not exist" containerID="85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7" Feb 18 01:32:17 crc kubenswrapper[4858]: I0218 01:32:17.783143 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7"} err="failed to get container status \"85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7\": rpc error: code = NotFound desc = could not find container \"85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7\": container with ID starting with 85eeff1d3d20d84804a0afe3710a1ed00eee43ef2faa96bc2ce7c046e7f6bfd7 not found: ID does not exist" Feb 18 01:32:19 crc kubenswrapper[4858]: E0218 01:32:19.423199 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:32:19 crc kubenswrapper[4858]: I0218 01:32:19.438515 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" path="/var/lib/kubelet/pods/2e5292a8-45d1-4dfd-8054-7372cdd77b09/volumes" Feb 18 01:32:29 crc kubenswrapper[4858]: E0218 01:32:29.421845 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:32:33 crc kubenswrapper[4858]: E0218 01:32:33.422427 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:32:42 crc kubenswrapper[4858]: E0218 01:32:42.422625 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:32:47 crc kubenswrapper[4858]: E0218 01:32:47.427861 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:32:56 crc kubenswrapper[4858]: E0218 01:32:56.422686 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:32:59 crc kubenswrapper[4858]: E0218 01:32:59.422380 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:33:11 crc kubenswrapper[4858]: E0218 01:33:11.420898 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:33:12 crc kubenswrapper[4858]: E0218 01:33:12.421635 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:33:23 crc kubenswrapper[4858]: I0218 01:33:23.424399 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:33:23 crc kubenswrapper[4858]: E0218 01:33:23.553421 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:33:23 crc kubenswrapper[4858]: E0218 01:33:23.553491 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:33:23 crc kubenswrapper[4858]: E0218 01:33:23.553651 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:33:23 crc kubenswrapper[4858]: E0218 01:33:23.554896 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:33:25 crc kubenswrapper[4858]: I0218 01:33:25.265547 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:33:25 crc kubenswrapper[4858]: I0218 01:33:25.265854 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:33:26 crc kubenswrapper[4858]: E0218 01:33:26.422179 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:33:35 crc kubenswrapper[4858]: E0218 01:33:35.421815 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:33:39 crc kubenswrapper[4858]: E0218 01:33:39.553869 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:33:39 crc kubenswrapper[4858]: E0218 01:33:39.554588 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:33:39 crc kubenswrapper[4858]: E0218 01:33:39.554802 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:33:39 crc kubenswrapper[4858]: E0218 01:33:39.555975 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:33:49 crc kubenswrapper[4858]: E0218 01:33:49.423237 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:33:51 crc kubenswrapper[4858]: E0218 01:33:51.423443 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:33:55 crc kubenswrapper[4858]: I0218 01:33:55.264960 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:33:55 crc kubenswrapper[4858]: I0218 01:33:55.265316 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:34:02 crc kubenswrapper[4858]: E0218 01:34:02.423956 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:34:06 crc kubenswrapper[4858]: E0218 01:34:06.421915 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:34:17 crc kubenswrapper[4858]: E0218 01:34:17.437594 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:34:18 crc kubenswrapper[4858]: E0218 01:34:18.421288 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:34:25 crc kubenswrapper[4858]: I0218 01:34:25.265531 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:34:25 crc kubenswrapper[4858]: I0218 01:34:25.266048 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:34:25 crc kubenswrapper[4858]: I0218 01:34:25.266096 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:34:25 crc kubenswrapper[4858]: I0218 01:34:25.266706 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:34:25 crc kubenswrapper[4858]: I0218 01:34:25.266771 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" gracePeriod=600 Feb 18 01:34:25 crc kubenswrapper[4858]: E0218 01:34:25.395075 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:34:26 crc kubenswrapper[4858]: I0218 01:34:26.126648 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" exitCode=0 Feb 18 01:34:26 crc kubenswrapper[4858]: I0218 01:34:26.126713 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7"} Feb 18 01:34:26 crc kubenswrapper[4858]: I0218 01:34:26.126756 4858 scope.go:117] "RemoveContainer" containerID="e391c93a1a146aadedb1ac36b6c2e49907bc1ff8030357e7dec22026586763db" Feb 18 01:34:26 crc kubenswrapper[4858]: I0218 01:34:26.127676 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:34:26 crc kubenswrapper[4858]: E0218 01:34:26.128226 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:34:28 crc kubenswrapper[4858]: E0218 01:34:28.424526 
4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:34:31 crc kubenswrapper[4858]: E0218 01:34:31.421935 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:34:39 crc kubenswrapper[4858]: I0218 01:34:39.420309 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:34:39 crc kubenswrapper[4858]: E0218 01:34:39.421628 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:34:40 crc kubenswrapper[4858]: E0218 01:34:40.422569 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:34:43 crc kubenswrapper[4858]: E0218 01:34:43.422822 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:34:53 crc kubenswrapper[4858]: I0218 01:34:53.420674 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:34:53 crc kubenswrapper[4858]: E0218 01:34:53.421805 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:34:54 crc kubenswrapper[4858]: E0218 01:34:54.422219 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:34:58 crc kubenswrapper[4858]: E0218 01:34:58.422193 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:35:05 crc kubenswrapper[4858]: E0218 01:35:05.424183 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:35:08 crc kubenswrapper[4858]: I0218 01:35:08.419888 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:35:08 crc kubenswrapper[4858]: E0218 01:35:08.420734 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:35:10 crc kubenswrapper[4858]: E0218 01:35:10.422761 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:35:16 crc kubenswrapper[4858]: E0218 01:35:16.421770 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:35:19 crc kubenswrapper[4858]: I0218 01:35:19.420778 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:35:19 crc kubenswrapper[4858]: E0218 01:35:19.422105 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:35:25 crc kubenswrapper[4858]: E0218 01:35:25.423374 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:35:28 crc kubenswrapper[4858]: E0218 01:35:28.434852 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" 
podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:35:34 crc kubenswrapper[4858]: I0218 01:35:34.420233 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:35:34 crc kubenswrapper[4858]: E0218 01:35:34.421341 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:35:38 crc kubenswrapper[4858]: E0218 01:35:38.422083 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:35:43 crc kubenswrapper[4858]: E0218 01:35:43.422394 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:35:47 crc kubenswrapper[4858]: I0218 01:35:47.425488 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:35:47 crc kubenswrapper[4858]: E0218 01:35:47.426132 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:35:53 crc kubenswrapper[4858]: E0218 01:35:53.422235 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:35:57 crc kubenswrapper[4858]: E0218 01:35:57.436614 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:36:00 crc kubenswrapper[4858]: I0218 01:36:00.420293 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:36:00 crc kubenswrapper[4858]: E0218 01:36:00.421534 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:36:01 crc kubenswrapper[4858]: I0218 01:36:01.175051 4858 generic.go:334] "Generic (PLEG): container finished" podID="bcd6a468-3c13-4a07-af88-b78f12b9de4f" containerID="73b882c94b04d6d992490f29a6814eccb6d6809ff187450de4fb74e76aa0554b" exitCode=2 Feb 18 01:36:01 crc kubenswrapper[4858]: I0218 01:36:01.175105 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" event={"ID":"bcd6a468-3c13-4a07-af88-b78f12b9de4f","Type":"ContainerDied","Data":"73b882c94b04d6d992490f29a6814eccb6d6809ff187450de4fb74e76aa0554b"} Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.755921 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.784255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-ssh-key-openstack-edpm-ipam\") pod \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.784429 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-inventory\") pod \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.785273 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t97xr\" (UniqueName: \"kubernetes.io/projected/bcd6a468-3c13-4a07-af88-b78f12b9de4f-kube-api-access-t97xr\") pod \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\" (UID: \"bcd6a468-3c13-4a07-af88-b78f12b9de4f\") " Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.805873 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd6a468-3c13-4a07-af88-b78f12b9de4f-kube-api-access-t97xr" (OuterVolumeSpecName: "kube-api-access-t97xr") pod "bcd6a468-3c13-4a07-af88-b78f12b9de4f" (UID: "bcd6a468-3c13-4a07-af88-b78f12b9de4f"). InnerVolumeSpecName "kube-api-access-t97xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.832800 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bcd6a468-3c13-4a07-af88-b78f12b9de4f" (UID: "bcd6a468-3c13-4a07-af88-b78f12b9de4f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.841876 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-inventory" (OuterVolumeSpecName: "inventory") pod "bcd6a468-3c13-4a07-af88-b78f12b9de4f" (UID: "bcd6a468-3c13-4a07-af88-b78f12b9de4f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.888580 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t97xr\" (UniqueName: \"kubernetes.io/projected/bcd6a468-3c13-4a07-af88-b78f12b9de4f-kube-api-access-t97xr\") on node \"crc\" DevicePath \"\"" Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.888626 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:36:02 crc kubenswrapper[4858]: I0218 01:36:02.888640 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcd6a468-3c13-4a07-af88-b78f12b9de4f-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:36:03 crc kubenswrapper[4858]: I0218 01:36:03.199062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" event={"ID":"bcd6a468-3c13-4a07-af88-b78f12b9de4f","Type":"ContainerDied","Data":"7f6cc97744d1c159cf30c555ee8673b93a39213183d2dc1ac5ca2d90f849bc23"} Feb 18 01:36:03 crc kubenswrapper[4858]: I0218 01:36:03.199127 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f6cc97744d1c159cf30c555ee8673b93a39213183d2dc1ac5ca2d90f849bc23" Feb 18 01:36:03 crc kubenswrapper[4858]: I0218 01:36:03.199138 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v55bj" Feb 18 01:36:04 crc kubenswrapper[4858]: E0218 01:36:04.421826 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:36:09 crc kubenswrapper[4858]: E0218 01:36:09.422814 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:36:11 crc kubenswrapper[4858]: I0218 01:36:11.419841 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:36:11 crc kubenswrapper[4858]: E0218 01:36:11.420801 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:36:15 crc kubenswrapper[4858]: E0218 01:36:15.422858 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:36:21 crc kubenswrapper[4858]: E0218 01:36:21.422928 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:36:23 crc kubenswrapper[4858]: I0218 01:36:23.419997 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:36:23 crc kubenswrapper[4858]: E0218 01:36:23.420893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:36:30 crc kubenswrapper[4858]: E0218 01:36:30.421292 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:36:34 crc kubenswrapper[4858]: I0218 01:36:34.420586 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:36:34 crc kubenswrapper[4858]: E0218 01:36:34.421423 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:36:35 crc kubenswrapper[4858]: E0218 01:36:35.423609 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:36:42 crc kubenswrapper[4858]: E0218 01:36:42.431926 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:36:47 crc kubenswrapper[4858]: I0218 01:36:47.426760 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:36:47 crc kubenswrapper[4858]: E0218 01:36:47.427443 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:36:47 crc kubenswrapper[4858]: E0218 01:36:47.430245 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.277173 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8bk29"] Feb 18 01:36:48 crc kubenswrapper[4858]: E0218 01:36:48.278042 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerName="extract-utilities" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.278066 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerName="extract-utilities" Feb 18 01:36:48 crc kubenswrapper[4858]: E0218 01:36:48.278083 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerName="registry-server" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.278091 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerName="registry-server" Feb 18 01:36:48 crc kubenswrapper[4858]: E0218 01:36:48.278121 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerName="extract-content" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.278129 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerName="extract-content" Feb 18 01:36:48 crc kubenswrapper[4858]: E0218 01:36:48.278142 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd6a468-3c13-4a07-af88-b78f12b9de4f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.278152 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd6a468-3c13-4a07-af88-b78f12b9de4f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.278404 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd6a468-3c13-4a07-af88-b78f12b9de4f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.278429 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5292a8-45d1-4dfd-8054-7372cdd77b09" containerName="registry-server" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.280349 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.290900 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bk29"] Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.428353 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-catalog-content\") pod \"redhat-marketplace-8bk29\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.428630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-utilities\") pod \"redhat-marketplace-8bk29\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.428706 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmczv\" (UniqueName: \"kubernetes.io/projected/74b45c64-c4c8-4a6c-9a32-6a85534dd890-kube-api-access-lmczv\") pod \"redhat-marketplace-8bk29\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.530743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-catalog-content\") pod \"redhat-marketplace-8bk29\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.530964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-utilities\") pod \"redhat-marketplace-8bk29\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.531007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmczv\" (UniqueName: \"kubernetes.io/projected/74b45c64-c4c8-4a6c-9a32-6a85534dd890-kube-api-access-lmczv\") pod \"redhat-marketplace-8bk29\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.531949 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-utilities\") pod \"redhat-marketplace-8bk29\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.532020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-catalog-content\") pod \"redhat-marketplace-8bk29\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.557362 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lmczv\" (UniqueName: \"kubernetes.io/projected/74b45c64-c4c8-4a6c-9a32-6a85534dd890-kube-api-access-lmczv\") pod \"redhat-marketplace-8bk29\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:48 crc kubenswrapper[4858]: I0218 01:36:48.636265 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:49 crc kubenswrapper[4858]: I0218 01:36:49.140143 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bk29"] Feb 18 01:36:49 crc kubenswrapper[4858]: I0218 01:36:49.720799 4858 generic.go:334] "Generic (PLEG): container finished" podID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerID="f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4" exitCode=0 Feb 18 01:36:49 crc kubenswrapper[4858]: I0218 01:36:49.720848 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bk29" event={"ID":"74b45c64-c4c8-4a6c-9a32-6a85534dd890","Type":"ContainerDied","Data":"f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4"} Feb 18 01:36:49 crc kubenswrapper[4858]: I0218 01:36:49.721088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bk29" event={"ID":"74b45c64-c4c8-4a6c-9a32-6a85534dd890","Type":"ContainerStarted","Data":"c71577d29fd2629946cf04a765ffdbcc52858418fa825a2a60151f31d2427469"} Feb 18 01:36:51 crc kubenswrapper[4858]: I0218 01:36:51.745841 4858 generic.go:334] "Generic (PLEG): container finished" podID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerID="e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654" exitCode=0 Feb 18 01:36:51 crc kubenswrapper[4858]: I0218 01:36:51.745887 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bk29" event={"ID":"74b45c64-c4c8-4a6c-9a32-6a85534dd890","Type":"ContainerDied","Data":"e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654"} Feb 18 01:36:52 crc kubenswrapper[4858]: I0218 01:36:52.761533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bk29" event={"ID":"74b45c64-c4c8-4a6c-9a32-6a85534dd890","Type":"ContainerStarted","Data":"d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2"} Feb 18 01:36:52 crc kubenswrapper[4858]: I0218 01:36:52.791123 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8bk29" podStartSLOduration=2.350288075 podStartE2EDuration="4.791101285s" podCreationTimestamp="2026-02-18 01:36:48 +0000 UTC" firstStartedPulling="2026-02-18 01:36:49.723562149 +0000 UTC m=+3763.029398891" lastFinishedPulling="2026-02-18 01:36:52.164375339 +0000 UTC m=+3765.470212101" observedRunningTime="2026-02-18 01:36:52.780605893 +0000 UTC m=+3766.086442625" watchObservedRunningTime="2026-02-18 01:36:52.791101285 +0000 UTC m=+3766.096938027" Feb 18 01:36:57 crc kubenswrapper[4858]: E0218 01:36:57.432029 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:36:58 crc kubenswrapper[4858]: I0218 01:36:58.637598 
4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:58 crc kubenswrapper[4858]: I0218 01:36:58.637914 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:58 crc kubenswrapper[4858]: I0218 01:36:58.717182 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:58 crc kubenswrapper[4858]: I0218 01:36:58.880804 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:36:58 crc kubenswrapper[4858]: I0218 01:36:58.954657 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bk29"] Feb 18 01:36:59 crc kubenswrapper[4858]: I0218 01:36:59.419831 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:36:59 crc kubenswrapper[4858]: E0218 01:36:59.420445 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:37:00 crc kubenswrapper[4858]: E0218 01:37:00.421381 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:37:00 crc kubenswrapper[4858]: I0218 01:37:00.842926 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8bk29" podUID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerName="registry-server" containerID="cri-o://d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2" gracePeriod=2 Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.422486 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.550824 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmczv\" (UniqueName: \"kubernetes.io/projected/74b45c64-c4c8-4a6c-9a32-6a85534dd890-kube-api-access-lmczv\") pod \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.551011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-catalog-content\") pod \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.551226 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-utilities\") pod \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\" (UID: \"74b45c64-c4c8-4a6c-9a32-6a85534dd890\") " Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.552664 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-utilities" (OuterVolumeSpecName: "utilities") pod "74b45c64-c4c8-4a6c-9a32-6a85534dd890" (UID: "74b45c64-c4c8-4a6c-9a32-6a85534dd890"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.559373 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b45c64-c4c8-4a6c-9a32-6a85534dd890-kube-api-access-lmczv" (OuterVolumeSpecName: "kube-api-access-lmczv") pod "74b45c64-c4c8-4a6c-9a32-6a85534dd890" (UID: "74b45c64-c4c8-4a6c-9a32-6a85534dd890"). InnerVolumeSpecName "kube-api-access-lmczv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.588082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74b45c64-c4c8-4a6c-9a32-6a85534dd890" (UID: "74b45c64-c4c8-4a6c-9a32-6a85534dd890"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.655017 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmczv\" (UniqueName: \"kubernetes.io/projected/74b45c64-c4c8-4a6c-9a32-6a85534dd890-kube-api-access-lmczv\") on node \"crc\" DevicePath \"\"" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.655287 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.655300 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b45c64-c4c8-4a6c-9a32-6a85534dd890-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.858169 4858 generic.go:334] "Generic (PLEG): container finished" podID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerID="d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2" exitCode=0 Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.858245 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bk29" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.858233 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bk29" event={"ID":"74b45c64-c4c8-4a6c-9a32-6a85534dd890","Type":"ContainerDied","Data":"d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2"} Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.858429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bk29" event={"ID":"74b45c64-c4c8-4a6c-9a32-6a85534dd890","Type":"ContainerDied","Data":"c71577d29fd2629946cf04a765ffdbcc52858418fa825a2a60151f31d2427469"} Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.858464 4858 scope.go:117] "RemoveContainer" containerID="d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.894824 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bk29"] Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.905320 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bk29"] Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.906791 4858 scope.go:117] "RemoveContainer" containerID="e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654" Feb 18 01:37:01 crc kubenswrapper[4858]: I0218 01:37:01.953811 4858 scope.go:117] "RemoveContainer" containerID="f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4" Feb 18 01:37:02 crc kubenswrapper[4858]: I0218 01:37:02.023536 4858 scope.go:117] "RemoveContainer" containerID="d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2" Feb 18 01:37:02 crc kubenswrapper[4858]: E0218 01:37:02.024102 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2\": container with ID starting with d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2 not found: ID does not exist" containerID="d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2" Feb 18 01:37:02 crc kubenswrapper[4858]: I0218 01:37:02.024171 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2"} err="failed to get container status \"d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2\": rpc error: code = NotFound desc = could not find container \"d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2\": container with ID starting with d757c3d3642727c5dd7c5d0e09001fed8d2e67b989c0b6557a94e4d2582095f2 not found: ID does not exist" Feb 18 01:37:02 crc kubenswrapper[4858]: I0218 01:37:02.024200 4858 scope.go:117] "RemoveContainer" containerID="e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654" Feb 18 01:37:02 crc kubenswrapper[4858]: E0218 01:37:02.027065 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654\": container with ID starting with e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654 not found: ID does not exist" containerID="e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654" Feb 18 01:37:02 crc kubenswrapper[4858]: I0218 01:37:02.027106 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654"} err="failed to get container status \"e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654\": rpc error: code = NotFound desc = could not find container \"e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654\": container with ID starting with e76306583476a95c0a4dc1596d2bda7ab064d5d6cda4551fb1f4f230d0cdd654 not found: ID does not exist" Feb 18 01:37:02 crc kubenswrapper[4858]: I0218 01:37:02.027132 4858 scope.go:117] "RemoveContainer" containerID="f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4" Feb 18 01:37:02 crc kubenswrapper[4858]: E0218 01:37:02.027453 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4\": container with ID starting with f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4 not found: ID does not exist" containerID="f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4" Feb 18 01:37:02 crc kubenswrapper[4858]: I0218 01:37:02.027514 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4"} err="failed to get container status \"f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4\": rpc error: code = NotFound desc = could not find container \"f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4\": container with ID starting with f0d722c832a4091795049b1dbfe67e450fc9a7edd5a09d884b464e9f7a1b21b4 not found: ID does not exist" Feb 18 01:37:03 crc kubenswrapper[4858]: I0218 01:37:03.435580 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" path="/var/lib/kubelet/pods/74b45c64-c4c8-4a6c-9a32-6a85534dd890/volumes" Feb 18 01:37:09 crc kubenswrapper[4858]: E0218 01:37:09.423260 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:37:14 crc kubenswrapper[4858]: I0218 01:37:14.419942 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:37:14 crc kubenswrapper[4858]: E0218 01:37:14.421140 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:37:15 crc kubenswrapper[4858]: E0218 01:37:15.422137 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:37:21 crc kubenswrapper[4858]: E0218 01:37:21.421374 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:37:27 crc kubenswrapper[4858]: I0218 01:37:27.432680 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:37:27 crc kubenswrapper[4858]: E0218 01:37:27.434068 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:37:28 crc kubenswrapper[4858]: E0218 01:37:28.421479 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:37:35 crc kubenswrapper[4858]: E0218 01:37:35.423168 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:37:38 crc kubenswrapper[4858]: I0218 01:37:38.419784 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:37:38 crc kubenswrapper[4858]: E0218 01:37:38.420929 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:37:42 crc kubenswrapper[4858]: E0218 01:37:42.421880 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:37:49 crc kubenswrapper[4858]: I0218 01:37:49.420124 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:37:49 crc kubenswrapper[4858]: E0218 01:37:49.421087 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:37:50 crc kubenswrapper[4858]: E0218 01:37:50.423139 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.516919 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k5bcm"] Feb 18 01:37:50 crc kubenswrapper[4858]: E0218 01:37:50.517443 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerName="registry-server" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.517457 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerName="registry-server" Feb 18 01:37:50 crc kubenswrapper[4858]: E0218 01:37:50.517483 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerName="extract-content" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.517507 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerName="extract-content" Feb 18 01:37:50 crc kubenswrapper[4858]: E0218 01:37:50.517538 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerName="extract-utilities" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.517549 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerName="extract-utilities" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.517834 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b45c64-c4c8-4a6c-9a32-6a85534dd890" containerName="registry-server" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.519770 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.535296 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5bcm"] Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.714383 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrlj\" (UniqueName: \"kubernetes.io/projected/fd919910-0a06-488b-8621-a8ff1e09df78-kube-api-access-rsrlj\") pod \"redhat-operators-k5bcm\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.714524 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-utilities\") pod \"redhat-operators-k5bcm\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.714550 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-catalog-content\") pod \"redhat-operators-k5bcm\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.816787 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-utilities\") pod \"redhat-operators-k5bcm\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.816841 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-catalog-content\") pod \"redhat-operators-k5bcm\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.816959 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrlj\" (UniqueName: \"kubernetes.io/projected/fd919910-0a06-488b-8621-a8ff1e09df78-kube-api-access-rsrlj\") pod \"redhat-operators-k5bcm\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.817570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-utilities\") pod \"redhat-operators-k5bcm\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.817649 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-catalog-content\") pod \"redhat-operators-k5bcm\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.849622 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rsrlj\" (UniqueName: \"kubernetes.io/projected/fd919910-0a06-488b-8621-a8ff1e09df78-kube-api-access-rsrlj\") pod \"redhat-operators-k5bcm\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:50 crc kubenswrapper[4858]: I0218 01:37:50.861929 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:37:51 crc kubenswrapper[4858]: I0218 01:37:51.451043 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5bcm"] Feb 18 01:37:51 crc kubenswrapper[4858]: I0218 01:37:51.702549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5bcm" event={"ID":"fd919910-0a06-488b-8621-a8ff1e09df78","Type":"ContainerStarted","Data":"b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79"} Feb 18 01:37:51 crc kubenswrapper[4858]: I0218 01:37:51.703012 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5bcm" event={"ID":"fd919910-0a06-488b-8621-a8ff1e09df78","Type":"ContainerStarted","Data":"c9e9fe29ef0dd1e367017aa8a72227c57d13e346be58db9746bce9569455094b"} Feb 18 01:37:52 crc kubenswrapper[4858]: I0218 01:37:52.713177 4858 generic.go:334] "Generic (PLEG): container finished" podID="fd919910-0a06-488b-8621-a8ff1e09df78" containerID="b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79" exitCode=0 Feb 18 01:37:52 crc kubenswrapper[4858]: I0218 01:37:52.713217 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5bcm" event={"ID":"fd919910-0a06-488b-8621-a8ff1e09df78","Type":"ContainerDied","Data":"b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79"} Feb 18 01:37:53 crc kubenswrapper[4858]: I0218 01:37:53.728475 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5bcm" event={"ID":"fd919910-0a06-488b-8621-a8ff1e09df78","Type":"ContainerStarted","Data":"bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c"} Feb 18 01:37:56 crc kubenswrapper[4858]: E0218 01:37:56.421835 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:37:57 crc kubenswrapper[4858]: I0218 01:37:57.779471 4858 generic.go:334] "Generic (PLEG): container finished" podID="fd919910-0a06-488b-8621-a8ff1e09df78" containerID="bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c" exitCode=0 Feb 18 01:37:57 crc kubenswrapper[4858]: I0218 01:37:57.779573 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5bcm" event={"ID":"fd919910-0a06-488b-8621-a8ff1e09df78","Type":"ContainerDied","Data":"bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c"} Feb 18 01:37:58 crc kubenswrapper[4858]: I0218 01:37:58.791605 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5bcm" event={"ID":"fd919910-0a06-488b-8621-a8ff1e09df78","Type":"ContainerStarted","Data":"3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0"} Feb 18 01:37:58 crc kubenswrapper[4858]: I0218 01:37:58.818614 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k5bcm" podStartSLOduration=3.303239255 podStartE2EDuration="8.81859544s" podCreationTimestamp="2026-02-18 01:37:50 +0000 UTC" firstStartedPulling="2026-02-18 01:37:52.716361353 +0000 UTC m=+3826.022198085" lastFinishedPulling="2026-02-18 01:37:58.231717498 +0000 UTC m=+3831.537554270" observedRunningTime="2026-02-18 01:37:58.812072083 +0000 UTC m=+3832.117908825" watchObservedRunningTime="2026-02-18 01:37:58.81859544 +0000 UTC m=+3832.124432172" Feb 18 01:38:00 crc kubenswrapper[4858]: I0218 01:38:00.865199 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:38:00 crc kubenswrapper[4858]: I0218 01:38:00.881222 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:38:01 crc kubenswrapper[4858]: I0218 01:38:01.422577 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:38:01 crc kubenswrapper[4858]: E0218 01:38:01.423321 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:38:01 crc kubenswrapper[4858]: I0218 01:38:01.932927 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k5bcm" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" containerName="registry-server" probeResult="failure" output=< Feb 18 01:38:01 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 01:38:01 crc kubenswrapper[4858]: > Feb 18 01:38:05 crc kubenswrapper[4858]: E0218 01:38:05.422787 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:38:08 crc kubenswrapper[4858]: E0218 01:38:08.423141 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:38:10 crc kubenswrapper[4858]: I0218 01:38:10.937350 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:38:11 crc kubenswrapper[4858]: I0218 01:38:11.013311 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:38:11 crc kubenswrapper[4858]: I0218 01:38:11.182926 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5bcm"] Feb 18 01:38:12 crc kubenswrapper[4858]: I0218 01:38:12.429860 4858 scope.go:117] "RemoveContainer" 
containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:38:12 crc kubenswrapper[4858]: E0218 01:38:12.430894 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:38:12 crc kubenswrapper[4858]: I0218 01:38:12.952348 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k5bcm" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" containerName="registry-server" containerID="cri-o://3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0" gracePeriod=2 Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.629378 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.723435 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsrlj\" (UniqueName: \"kubernetes.io/projected/fd919910-0a06-488b-8621-a8ff1e09df78-kube-api-access-rsrlj\") pod \"fd919910-0a06-488b-8621-a8ff1e09df78\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.723956 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-utilities\") pod \"fd919910-0a06-488b-8621-a8ff1e09df78\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.724739 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-catalog-content\") pod \"fd919910-0a06-488b-8621-a8ff1e09df78\" (UID: \"fd919910-0a06-488b-8621-a8ff1e09df78\") " Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.724932 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-utilities" (OuterVolumeSpecName: "utilities") pod "fd919910-0a06-488b-8621-a8ff1e09df78" (UID: "fd919910-0a06-488b-8621-a8ff1e09df78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.725941 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.736766 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd919910-0a06-488b-8621-a8ff1e09df78-kube-api-access-rsrlj" (OuterVolumeSpecName: "kube-api-access-rsrlj") pod "fd919910-0a06-488b-8621-a8ff1e09df78" (UID: "fd919910-0a06-488b-8621-a8ff1e09df78"). InnerVolumeSpecName "kube-api-access-rsrlj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.828846 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsrlj\" (UniqueName: \"kubernetes.io/projected/fd919910-0a06-488b-8621-a8ff1e09df78-kube-api-access-rsrlj\") on node \"crc\" DevicePath \"\"" Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.928194 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd919910-0a06-488b-8621-a8ff1e09df78" (UID: "fd919910-0a06-488b-8621-a8ff1e09df78"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.930450 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd919910-0a06-488b-8621-a8ff1e09df78-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.962735 4858 generic.go:334] "Generic (PLEG): container finished" podID="fd919910-0a06-488b-8621-a8ff1e09df78" containerID="3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0" exitCode=0 Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.962772 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5bcm" event={"ID":"fd919910-0a06-488b-8621-a8ff1e09df78","Type":"ContainerDied","Data":"3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0"} Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.962811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5bcm" event={"ID":"fd919910-0a06-488b-8621-a8ff1e09df78","Type":"ContainerDied","Data":"c9e9fe29ef0dd1e367017aa8a72227c57d13e346be58db9746bce9569455094b"} Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.962829 4858 scope.go:117] "RemoveContainer" containerID="3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0" Feb 18 01:38:13 crc kubenswrapper[4858]: I0218 01:38:13.963129 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k5bcm" Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.009061 4858 scope.go:117] "RemoveContainer" containerID="bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c" Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.009725 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5bcm"] Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.020562 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k5bcm"] Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.034941 4858 scope.go:117] "RemoveContainer" containerID="b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79" Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.100743 4858 scope.go:117] "RemoveContainer" containerID="3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0" Feb 18 01:38:14 crc kubenswrapper[4858]: E0218 01:38:14.102586 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0\": container with ID starting with 3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0 not found: ID does not exist" containerID="3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0" Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.102648 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0"} err="failed to get container status \"3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0\": rpc error: code = NotFound desc = could not find container \"3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0\": container with ID starting with 3ba1ead9730e4d8d99991ae32ffed813f86e8c1a4e605246e3d4f5932a94e6f0 not found: ID does not exist" Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.102703 4858 scope.go:117] "RemoveContainer" containerID="bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c" Feb 18 01:38:14 crc kubenswrapper[4858]: E0218 01:38:14.103012 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c\": container with ID starting with bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c not found: ID does not exist" containerID="bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c" Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.103039 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c"} err="failed to get container status \"bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c\": rpc error: code = NotFound desc = could not find container \"bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c\": container with ID starting with bc73efd328fa8659c37e03047d5cf97872a03fa1f952444d5866b20a013d095c not found: ID does not exist" Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.103059 4858 scope.go:117] "RemoveContainer" containerID="b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79" Feb 18 01:38:14 crc kubenswrapper[4858]: E0218 01:38:14.103352 4858 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79\": container with ID starting with b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79 not found: ID does not exist" containerID="b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79" Feb 18 01:38:14 crc kubenswrapper[4858]: I0218 01:38:14.103394 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79"} err="failed to get container status \"b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79\": rpc error: code = NotFound desc = could not find container \"b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79\": container with ID starting with b2624a1d1e83711d86b5c2cae6b0b857bf940add84a0ba99163716c9bcb41b79 not found: ID does not exist" Feb 18 01:38:15 crc kubenswrapper[4858]: I0218 01:38:15.439229 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" path="/var/lib/kubelet/pods/fd919910-0a06-488b-8621-a8ff1e09df78/volumes" Feb 18 01:38:19 crc kubenswrapper[4858]: E0218 01:38:19.425244 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:38:20 crc kubenswrapper[4858]: E0218 01:38:20.422552 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:38:23 crc kubenswrapper[4858]: I0218 01:38:23.420023 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:38:23 crc kubenswrapper[4858]: E0218 01:38:23.420907 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:38:31 crc kubenswrapper[4858]: E0218 01:38:31.424789 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:38:33 crc kubenswrapper[4858]: I0218 01:38:33.421766 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:38:33 crc kubenswrapper[4858]: E0218 01:38:33.537634 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:38:33 crc kubenswrapper[4858]: E0218 01:38:33.537685 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:38:33 crc kubenswrapper[4858]: E0218 01:38:33.537825 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:38:33 crc kubenswrapper[4858]: E0218 01:38:33.539067 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:38:36 crc kubenswrapper[4858]: I0218 01:38:36.420161 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:38:36 crc kubenswrapper[4858]: E0218 01:38:36.422213 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.047385 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4"] Feb 18 01:38:40 crc kubenswrapper[4858]: E0218 01:38:40.048877 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" containerName="registry-server" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.048903 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" containerName="registry-server" Feb 18 01:38:40 crc kubenswrapper[4858]: E0218 01:38:40.048942 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" containerName="extract-content" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.048957 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" containerName="extract-content" Feb 18 01:38:40 crc kubenswrapper[4858]: E0218 01:38:40.049007 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" containerName="extract-utilities" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.049021 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" containerName="extract-utilities" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.049365 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd919910-0a06-488b-8621-a8ff1e09df78" containerName="registry-server" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.050676 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.053197 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.053215 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.054040 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.054726 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.065237 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4"] Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.203968 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.204655 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.204743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htlnf\" (UniqueName: \"kubernetes.io/projected/65cd5b4f-e1ce-401d-b2e7-9c622282c342-kube-api-access-htlnf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.306867 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.307096 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.307151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htlnf\" (UniqueName: 
\"kubernetes.io/projected/65cd5b4f-e1ce-401d-b2e7-9c622282c342-kube-api-access-htlnf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.315119 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.315315 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.326245 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htlnf\" (UniqueName: \"kubernetes.io/projected/65cd5b4f-e1ce-401d-b2e7-9c622282c342-kube-api-access-htlnf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: I0218 01:38:40.392739 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:38:40 crc kubenswrapper[4858]: W0218 01:38:40.997593 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65cd5b4f_e1ce_401d_b2e7_9c622282c342.slice/crio-2f4e6152f7b148894fc10cf0e31b69ae93adfc55c6d01baa3cef87d428a3b2e3 WatchSource:0}: Error finding container 2f4e6152f7b148894fc10cf0e31b69ae93adfc55c6d01baa3cef87d428a3b2e3: Status 404 returned error can't find the container with id 2f4e6152f7b148894fc10cf0e31b69ae93adfc55c6d01baa3cef87d428a3b2e3 Feb 18 01:38:41 crc kubenswrapper[4858]: I0218 01:38:41.005083 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4"] Feb 18 01:38:41 crc kubenswrapper[4858]: I0218 01:38:41.328273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" event={"ID":"65cd5b4f-e1ce-401d-b2e7-9c622282c342","Type":"ContainerStarted","Data":"2f4e6152f7b148894fc10cf0e31b69ae93adfc55c6d01baa3cef87d428a3b2e3"} Feb 18 01:38:42 crc kubenswrapper[4858]: I0218 01:38:42.343278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" event={"ID":"65cd5b4f-e1ce-401d-b2e7-9c622282c342","Type":"ContainerStarted","Data":"9ef348ef56eb0050776c36909b64976169fbd0688bdbe5b6099aa0c44c484296"} Feb 18 01:38:42 crc kubenswrapper[4858]: I0218 01:38:42.360391 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" podStartSLOduration=1.88243174 podStartE2EDuration="2.360363856s" podCreationTimestamp="2026-02-18 
01:38:40 +0000 UTC" firstStartedPulling="2026-02-18 01:38:41.000809007 +0000 UTC m=+3874.306645749" lastFinishedPulling="2026-02-18 01:38:41.478741133 +0000 UTC m=+3874.784577865" observedRunningTime="2026-02-18 01:38:42.359130326 +0000 UTC m=+3875.664967138" watchObservedRunningTime="2026-02-18 01:38:42.360363856 +0000 UTC m=+3875.666200628" Feb 18 01:38:42 crc kubenswrapper[4858]: E0218 01:38:42.516364 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:38:42 crc kubenswrapper[4858]: E0218 01:38:42.516427 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:38:42 crc kubenswrapper[4858]: E0218 01:38:42.516587 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:38:42 crc kubenswrapper[4858]: E0218 01:38:42.517952 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:38:45 crc kubenswrapper[4858]: E0218 01:38:45.422096 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:38:49 crc kubenswrapper[4858]: I0218 01:38:49.421558 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:38:49 crc kubenswrapper[4858]: E0218 01:38:49.422352 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:38:55 crc kubenswrapper[4858]: E0218 01:38:55.424822 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:38:58 crc kubenswrapper[4858]: E0218 01:38:58.421679 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:39:00 crc kubenswrapper[4858]: I0218 01:39:00.419613 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:39:00 crc kubenswrapper[4858]: E0218 01:39:00.420231 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:39:10 crc kubenswrapper[4858]: E0218 01:39:10.422427 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:39:11 crc kubenswrapper[4858]: I0218 01:39:11.421067 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:39:11 crc kubenswrapper[4858]: E0218 01:39:11.421331 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:39:11 crc kubenswrapper[4858]: E0218 01:39:11.422354 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:39:22 crc kubenswrapper[4858]: I0218 01:39:22.419994 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:39:22 crc kubenswrapper[4858]: E0218 01:39:22.421988 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:39:24 crc kubenswrapper[4858]: E0218 01:39:24.424573 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:39:25 crc kubenswrapper[4858]: E0218 01:39:25.421615 4858 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:39:37 crc kubenswrapper[4858]: I0218 01:39:37.430710 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:39:37 crc kubenswrapper[4858]: E0218 01:39:37.431754 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:39:37 crc kubenswrapper[4858]: I0218 01:39:37.992600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"068364f64e37a652fbb7f5b1e3f6cc4f127ff5583f5759450d7d6ed6d0ee50a9"} Feb 18 01:39:40 crc kubenswrapper[4858]: E0218 01:39:40.425574 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:39:49 crc kubenswrapper[4858]: E0218 01:39:49.421535 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:39:52 crc kubenswrapper[4858]: E0218 01:39:52.421534 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:40:01 crc kubenswrapper[4858]: E0218 01:40:01.422312 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:40:06 crc kubenswrapper[4858]: E0218 01:40:06.422914 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:40:14 crc kubenswrapper[4858]: E0218 01:40:14.424820 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.163728 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7gzqm"] Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.166907 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.196890 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gzqm"] Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.202857 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd87l\" (UniqueName: \"kubernetes.io/projected/6ae26f23-9145-485a-8d82-66450c8a8254-kube-api-access-hd87l\") pod \"certified-operators-7gzqm\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.202961 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-utilities\") pod \"certified-operators-7gzqm\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.203001 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-catalog-content\") pod \"certified-operators-7gzqm\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.305665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd87l\" (UniqueName: \"kubernetes.io/projected/6ae26f23-9145-485a-8d82-66450c8a8254-kube-api-access-hd87l\") pod \"certified-operators-7gzqm\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.305799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-utilities\") pod \"certified-operators-7gzqm\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.305852 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-catalog-content\") pod \"certified-operators-7gzqm\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.306740 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-catalog-content\") pod \"certified-operators-7gzqm\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 
crc kubenswrapper[4858]: I0218 01:40:16.307139 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-utilities\") pod \"certified-operators-7gzqm\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.330239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd87l\" (UniqueName: \"kubernetes.io/projected/6ae26f23-9145-485a-8d82-66450c8a8254-kube-api-access-hd87l\") pod \"certified-operators-7gzqm\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:16 crc kubenswrapper[4858]: I0218 01:40:16.493254 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:17 crc kubenswrapper[4858]: I0218 01:40:17.107828 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gzqm"] Feb 18 01:40:17 crc kubenswrapper[4858]: E0218 01:40:17.429207 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:40:17 crc kubenswrapper[4858]: I0218 01:40:17.436639 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ae26f23-9145-485a-8d82-66450c8a8254" containerID="987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b" exitCode=0 Feb 18 01:40:17 crc kubenswrapper[4858]: I0218 01:40:17.436684 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gzqm" event={"ID":"6ae26f23-9145-485a-8d82-66450c8a8254","Type":"ContainerDied","Data":"987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b"} Feb 18 01:40:17 crc kubenswrapper[4858]: I0218 01:40:17.436730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gzqm" event={"ID":"6ae26f23-9145-485a-8d82-66450c8a8254","Type":"ContainerStarted","Data":"d9b86fe6a566b6854a94517fe6913954d8792664162d4dffc6feb15410e75986"} Feb 18 01:40:18 crc kubenswrapper[4858]: I0218 01:40:18.449530 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gzqm" event={"ID":"6ae26f23-9145-485a-8d82-66450c8a8254","Type":"ContainerStarted","Data":"e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709"} Feb 18 01:40:19 crc kubenswrapper[4858]: I0218 01:40:19.460408 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ae26f23-9145-485a-8d82-66450c8a8254" containerID="e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709" exitCode=0 Feb 18 01:40:19 crc kubenswrapper[4858]: I0218 01:40:19.460625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gzqm" event={"ID":"6ae26f23-9145-485a-8d82-66450c8a8254","Type":"ContainerDied","Data":"e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709"} Feb 18 01:40:20 crc kubenswrapper[4858]: I0218 01:40:20.471049 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gzqm" 
event={"ID":"6ae26f23-9145-485a-8d82-66450c8a8254","Type":"ContainerStarted","Data":"932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023"} Feb 18 01:40:20 crc kubenswrapper[4858]: I0218 01:40:20.511125 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7gzqm" podStartSLOduration=2.083919442 podStartE2EDuration="4.511102091s" podCreationTimestamp="2026-02-18 01:40:16 +0000 UTC" firstStartedPulling="2026-02-18 01:40:17.438174846 +0000 UTC m=+3970.744011578" lastFinishedPulling="2026-02-18 01:40:19.865357495 +0000 UTC m=+3973.171194227" observedRunningTime="2026-02-18 01:40:20.500536422 +0000 UTC m=+3973.806373174" watchObservedRunningTime="2026-02-18 01:40:20.511102091 +0000 UTC m=+3973.816938843" Feb 18 01:40:26 crc kubenswrapper[4858]: I0218 01:40:26.493712 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:26 crc kubenswrapper[4858]: I0218 01:40:26.494835 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:26 crc kubenswrapper[4858]: I0218 01:40:26.562237 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:26 crc kubenswrapper[4858]: I0218 01:40:26.632270 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:26 crc kubenswrapper[4858]: I0218 01:40:26.804016 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gzqm"] Feb 18 01:40:28 crc kubenswrapper[4858]: E0218 01:40:28.423584 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:40:28 crc kubenswrapper[4858]: I0218 01:40:28.549679 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7gzqm" podUID="6ae26f23-9145-485a-8d82-66450c8a8254" containerName="registry-server" containerID="cri-o://932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023" gracePeriod=2 Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.333880 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.505977 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-utilities\") pod \"6ae26f23-9145-485a-8d82-66450c8a8254\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.506147 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-catalog-content\") pod \"6ae26f23-9145-485a-8d82-66450c8a8254\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.506297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd87l\" (UniqueName: \"kubernetes.io/projected/6ae26f23-9145-485a-8d82-66450c8a8254-kube-api-access-hd87l\") pod \"6ae26f23-9145-485a-8d82-66450c8a8254\" (UID: \"6ae26f23-9145-485a-8d82-66450c8a8254\") " Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.506972 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-utilities" (OuterVolumeSpecName: "utilities") pod "6ae26f23-9145-485a-8d82-66450c8a8254" (UID: "6ae26f23-9145-485a-8d82-66450c8a8254"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.513029 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae26f23-9145-485a-8d82-66450c8a8254-kube-api-access-hd87l" (OuterVolumeSpecName: "kube-api-access-hd87l") pod "6ae26f23-9145-485a-8d82-66450c8a8254" (UID: "6ae26f23-9145-485a-8d82-66450c8a8254"). InnerVolumeSpecName "kube-api-access-hd87l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.559828 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ae26f23-9145-485a-8d82-66450c8a8254" containerID="932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023" exitCode=0 Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.559878 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gzqm" event={"ID":"6ae26f23-9145-485a-8d82-66450c8a8254","Type":"ContainerDied","Data":"932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023"} Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.559908 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gzqm" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.559927 4858 scope.go:117] "RemoveContainer" containerID="932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.559915 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gzqm" event={"ID":"6ae26f23-9145-485a-8d82-66450c8a8254","Type":"ContainerDied","Data":"d9b86fe6a566b6854a94517fe6913954d8792664162d4dffc6feb15410e75986"} Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.571185 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ae26f23-9145-485a-8d82-66450c8a8254" (UID: "6ae26f23-9145-485a-8d82-66450c8a8254"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.592359 4858 scope.go:117] "RemoveContainer" containerID="e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.609590 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.609620 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae26f23-9145-485a-8d82-66450c8a8254-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.609630 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd87l\" (UniqueName: \"kubernetes.io/projected/6ae26f23-9145-485a-8d82-66450c8a8254-kube-api-access-hd87l\") on node \"crc\" DevicePath \"\"" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.616900 4858 scope.go:117] "RemoveContainer" containerID="987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.669917 4858 scope.go:117] "RemoveContainer" containerID="932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023" Feb 18 01:40:29 crc kubenswrapper[4858]: E0218 01:40:29.671366 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023\": container with ID starting with 932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023 not found: ID does not exist" containerID="932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.671528 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023"} err="failed to get container status \"932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023\": rpc error: code = NotFound desc = could not find container \"932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023\": container with ID starting with 932a6b4091904bbf9374ab724cb63f45340750ab5272ed303956bf54e7edb023 not found: ID does not exist" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.671652 4858 scope.go:117] "RemoveContainer" 
containerID="e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709" Feb 18 01:40:29 crc kubenswrapper[4858]: E0218 01:40:29.672112 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709\": container with ID starting with e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709 not found: ID does not exist" containerID="e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.672150 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709"} err="failed to get container status \"e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709\": rpc error: code = NotFound desc = could not find container \"e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709\": container with ID starting with e44913d77dea4c784942d241be008d31ae6928fdda888d591ae8eaa387359709 not found: ID does not exist" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.672176 4858 scope.go:117] "RemoveContainer" containerID="987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b" Feb 18 01:40:29 crc kubenswrapper[4858]: E0218 01:40:29.672659 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b\": container with ID starting with 987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b not found: ID does not exist" containerID="987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.672681 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b"} err="failed to get container status \"987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b\": rpc error: code = NotFound desc = could not find container \"987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b\": container with ID starting with 987361d2c158e36011fa4f76fbe0f86b3d38361e15c130e1a3f482c2c5bbb91b not found: ID does not exist" Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.899879 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gzqm"] Feb 18 01:40:29 crc kubenswrapper[4858]: I0218 01:40:29.908106 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7gzqm"] Feb 18 01:40:31 crc kubenswrapper[4858]: E0218 01:40:31.422848 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:40:31 crc kubenswrapper[4858]: I0218 01:40:31.439520 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae26f23-9145-485a-8d82-66450c8a8254" path="/var/lib/kubelet/pods/6ae26f23-9145-485a-8d82-66450c8a8254/volumes" Feb 18 01:40:42 crc kubenswrapper[4858]: E0218 01:40:42.422692 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:40:42 crc kubenswrapper[4858]: E0218 01:40:42.422837 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:40:53 crc kubenswrapper[4858]: E0218 01:40:53.424481 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:40:54 crc kubenswrapper[4858]: E0218 01:40:54.421296 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:41:04 crc kubenswrapper[4858]: E0218 01:41:04.422402 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:41:07 crc kubenswrapper[4858]: E0218 01:41:07.436367 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:41:19 crc kubenswrapper[4858]: E0218 01:41:19.422571 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:41:20 crc kubenswrapper[4858]: E0218 01:41:20.421220 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:41:31 crc kubenswrapper[4858]: E0218 01:41:31.421443 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:41:32 crc 
kubenswrapper[4858]: E0218 01:41:32.420241 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:41:45 crc kubenswrapper[4858]: E0218 01:41:45.423993 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:41:47 crc kubenswrapper[4858]: E0218 01:41:47.432435 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:41:52 crc kubenswrapper[4858]: I0218 01:41:52.767430 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a845f908-18e9-47e2-bc4f-01308c8a69b3" containerName="galera" probeResult="failure" output="command timed out" Feb 18 01:41:55 crc kubenswrapper[4858]: I0218 01:41:55.265003 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:41:55 crc kubenswrapper[4858]: I0218 01:41:55.265344 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:41:56 crc kubenswrapper[4858]: E0218 01:41:56.420996 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:41:59 crc kubenswrapper[4858]: E0218 01:41:59.422706 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:42:10 crc kubenswrapper[4858]: E0218 01:42:10.421548 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:42:10 crc kubenswrapper[4858]: E0218 01:42:10.423093 4858 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:42:21 crc kubenswrapper[4858]: E0218 01:42:21.423003 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:42:24 crc kubenswrapper[4858]: E0218 01:42:24.421419 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:42:25 crc kubenswrapper[4858]: I0218 01:42:25.265447 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:42:25 crc kubenswrapper[4858]: I0218 01:42:25.265903 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:42:33 crc kubenswrapper[4858]: E0218 01:42:33.422988 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:42:38 crc kubenswrapper[4858]: E0218 01:42:38.425780 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:42:44 crc kubenswrapper[4858]: E0218 01:42:44.424361 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:42:50 crc kubenswrapper[4858]: E0218 01:42:50.421674 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:42:55 crc 
kubenswrapper[4858]: I0218 01:42:55.265725 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:42:55 crc kubenswrapper[4858]: I0218 01:42:55.266357 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:42:55 crc kubenswrapper[4858]: I0218 01:42:55.266408 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:42:55 crc kubenswrapper[4858]: I0218 01:42:55.267180 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"068364f64e37a652fbb7f5b1e3f6cc4f127ff5583f5759450d7d6ed6d0ee50a9"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:42:55 crc kubenswrapper[4858]: I0218 01:42:55.267237 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://068364f64e37a652fbb7f5b1e3f6cc4f127ff5583f5759450d7d6ed6d0ee50a9" gracePeriod=600 Feb 18 01:42:56 crc kubenswrapper[4858]: I0218 01:42:56.248585 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="068364f64e37a652fbb7f5b1e3f6cc4f127ff5583f5759450d7d6ed6d0ee50a9" exitCode=0 Feb 18 01:42:56 crc kubenswrapper[4858]: I0218 01:42:56.248654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"068364f64e37a652fbb7f5b1e3f6cc4f127ff5583f5759450d7d6ed6d0ee50a9"} Feb 18 01:42:56 crc kubenswrapper[4858]: I0218 01:42:56.249243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49"} Feb 18 01:42:56 crc kubenswrapper[4858]: I0218 01:42:56.249266 4858 scope.go:117] "RemoveContainer" containerID="307d40f413f38c929a75fad830ddd9f94cfaf10daaceff3b871bfb31bd478fc7" Feb 18 01:42:56 crc kubenswrapper[4858]: E0218 01:42:56.421411 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:43:05 crc kubenswrapper[4858]: E0218 01:43:05.423028 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:43:08 crc kubenswrapper[4858]: E0218 01:43:08.421892 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.858002 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qlnzf"] Feb 18 01:43:09 crc kubenswrapper[4858]: E0218 01:43:09.858979 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae26f23-9145-485a-8d82-66450c8a8254" containerName="registry-server" Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.859003 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae26f23-9145-485a-8d82-66450c8a8254" containerName="registry-server" Feb 18 01:43:09 crc kubenswrapper[4858]: E0218 01:43:09.859023 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae26f23-9145-485a-8d82-66450c8a8254" containerName="extract-content" Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.859036 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae26f23-9145-485a-8d82-66450c8a8254" containerName="extract-content" Feb 18 01:43:09 crc kubenswrapper[4858]: E0218 01:43:09.859081 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae26f23-9145-485a-8d82-66450c8a8254" containerName="extract-utilities" Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.859093 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae26f23-9145-485a-8d82-66450c8a8254" containerName="extract-utilities" Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.859431 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae26f23-9145-485a-8d82-66450c8a8254" containerName="registry-server" Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.861833 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.898827 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qlnzf"] Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.987298 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzq7h\" (UniqueName: \"kubernetes.io/projected/b54dbed0-8212-440d-82da-2819cde38c72-kube-api-access-pzq7h\") pod \"community-operators-qlnzf\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.987552 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-catalog-content\") pod \"community-operators-qlnzf\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:09 crc kubenswrapper[4858]: I0218 01:43:09.988092 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-utilities\") pod \"community-operators-qlnzf\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:10 crc kubenswrapper[4858]: I0218 01:43:10.089892 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-utilities\") pod \"community-operators-qlnzf\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:10 crc kubenswrapper[4858]: I0218 01:43:10.089974 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzq7h\" (UniqueName: \"kubernetes.io/projected/b54dbed0-8212-440d-82da-2819cde38c72-kube-api-access-pzq7h\") pod \"community-operators-qlnzf\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:10 crc kubenswrapper[4858]: I0218 01:43:10.090024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-catalog-content\") pod \"community-operators-qlnzf\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:10 crc kubenswrapper[4858]: I0218 01:43:10.090469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-utilities\") pod \"community-operators-qlnzf\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:10 crc kubenswrapper[4858]: I0218 01:43:10.090558 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-catalog-content\") pod \"community-operators-qlnzf\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:10 crc kubenswrapper[4858]: I0218 01:43:10.110581 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pzq7h\" (UniqueName: \"kubernetes.io/projected/b54dbed0-8212-440d-82da-2819cde38c72-kube-api-access-pzq7h\") pod \"community-operators-qlnzf\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:10 crc kubenswrapper[4858]: I0218 01:43:10.192361 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:11 crc kubenswrapper[4858]: I0218 01:43:11.317409 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qlnzf"] Feb 18 01:43:11 crc kubenswrapper[4858]: I0218 01:43:11.413846 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnzf" event={"ID":"b54dbed0-8212-440d-82da-2819cde38c72","Type":"ContainerStarted","Data":"d68140088647f554b7ff755879843f47ad729b802d9ba2e56e5cdd0bc9f650e5"} Feb 18 01:43:12 crc kubenswrapper[4858]: I0218 01:43:12.424876 4858 generic.go:334] "Generic (PLEG): container finished" podID="b54dbed0-8212-440d-82da-2819cde38c72" containerID="73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c" exitCode=0 Feb 18 01:43:12 crc kubenswrapper[4858]: I0218 01:43:12.424959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnzf" event={"ID":"b54dbed0-8212-440d-82da-2819cde38c72","Type":"ContainerDied","Data":"73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c"} Feb 18 01:43:13 crc kubenswrapper[4858]: I0218 01:43:13.438536 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnzf" event={"ID":"b54dbed0-8212-440d-82da-2819cde38c72","Type":"ContainerStarted","Data":"b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d"} Feb 18 01:43:14 crc kubenswrapper[4858]: I0218 01:43:14.452314 4858 generic.go:334] "Generic (PLEG): container finished" podID="b54dbed0-8212-440d-82da-2819cde38c72" containerID="b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d" exitCode=0 Feb 18 01:43:14 crc kubenswrapper[4858]: I0218 01:43:14.452414 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnzf" event={"ID":"b54dbed0-8212-440d-82da-2819cde38c72","Type":"ContainerDied","Data":"b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d"} Feb 18 01:43:15 crc kubenswrapper[4858]: I0218 01:43:15.464477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnzf" event={"ID":"b54dbed0-8212-440d-82da-2819cde38c72","Type":"ContainerStarted","Data":"d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26"} Feb 18 01:43:15 crc kubenswrapper[4858]: I0218 01:43:15.530102 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qlnzf" podStartSLOduration=4.047468609 podStartE2EDuration="6.53007688s" podCreationTimestamp="2026-02-18 01:43:09 +0000 UTC" firstStartedPulling="2026-02-18 01:43:12.426861508 +0000 UTC m=+4145.732698250" lastFinishedPulling="2026-02-18 01:43:14.909469759 +0000 UTC m=+4148.215306521" observedRunningTime="2026-02-18 01:43:15.522151223 +0000 UTC m=+4148.827987985" watchObservedRunningTime="2026-02-18 01:43:15.53007688 +0000 UTC m=+4148.835913632" Feb 18 01:43:17 crc kubenswrapper[4858]: E0218 01:43:17.429089 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:43:19 crc kubenswrapper[4858]: E0218 01:43:19.422450 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:43:20 crc kubenswrapper[4858]: I0218 01:43:20.193740 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:20 crc kubenswrapper[4858]: I0218 01:43:20.193786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:20 crc kubenswrapper[4858]: I0218 01:43:20.260827 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:20 crc kubenswrapper[4858]: I0218 01:43:20.562170 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:20 crc kubenswrapper[4858]: I0218 01:43:20.624378 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlnzf"] Feb 18 01:43:22 crc kubenswrapper[4858]: I0218 01:43:22.541566 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qlnzf" podUID="b54dbed0-8212-440d-82da-2819cde38c72" containerName="registry-server" containerID="cri-o://d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26" gracePeriod=2 Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.137838 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.212968 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-catalog-content\") pod \"b54dbed0-8212-440d-82da-2819cde38c72\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.213092 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzq7h\" (UniqueName: \"kubernetes.io/projected/b54dbed0-8212-440d-82da-2819cde38c72-kube-api-access-pzq7h\") pod \"b54dbed0-8212-440d-82da-2819cde38c72\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.214043 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-utilities\") pod \"b54dbed0-8212-440d-82da-2819cde38c72\" (UID: \"b54dbed0-8212-440d-82da-2819cde38c72\") " Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.215249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-utilities" (OuterVolumeSpecName: "utilities") pod "b54dbed0-8212-440d-82da-2819cde38c72" (UID: "b54dbed0-8212-440d-82da-2819cde38c72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.219854 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54dbed0-8212-440d-82da-2819cde38c72-kube-api-access-pzq7h" (OuterVolumeSpecName: "kube-api-access-pzq7h") pod "b54dbed0-8212-440d-82da-2819cde38c72" (UID: "b54dbed0-8212-440d-82da-2819cde38c72"). InnerVolumeSpecName "kube-api-access-pzq7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.268277 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b54dbed0-8212-440d-82da-2819cde38c72" (UID: "b54dbed0-8212-440d-82da-2819cde38c72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.316249 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.316276 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzq7h\" (UniqueName: \"kubernetes.io/projected/b54dbed0-8212-440d-82da-2819cde38c72-kube-api-access-pzq7h\") on node \"crc\" DevicePath \"\"" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.316286 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54dbed0-8212-440d-82da-2819cde38c72-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.552572 4858 generic.go:334] "Generic (PLEG): container finished" podID="b54dbed0-8212-440d-82da-2819cde38c72" containerID="d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26" exitCode=0 Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.552673 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlnzf" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.554292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnzf" event={"ID":"b54dbed0-8212-440d-82da-2819cde38c72","Type":"ContainerDied","Data":"d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26"} Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.554812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnzf" event={"ID":"b54dbed0-8212-440d-82da-2819cde38c72","Type":"ContainerDied","Data":"d68140088647f554b7ff755879843f47ad729b802d9ba2e56e5cdd0bc9f650e5"} Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.554895 4858 scope.go:117] "RemoveContainer" containerID="d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.601590 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlnzf"] Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.607366 4858 scope.go:117] "RemoveContainer" containerID="b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.616379 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qlnzf"] Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.700778 4858 scope.go:117] "RemoveContainer" containerID="73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.744511 4858 scope.go:117] "RemoveContainer" containerID="d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26" Feb 18 01:43:23 crc kubenswrapper[4858]: E0218 01:43:23.745356 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26\": container with ID starting with d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26 not found: ID does not exist" containerID="d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.745397 
4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26"} err="failed to get container status \"d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26\": rpc error: code = NotFound desc = could not find container \"d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26\": container with ID starting with d8fae66ecc4863276e9f13507b91579f3ac4812c146dd3285eae37fa3c6cfb26 not found: ID does not exist" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.745422 4858 scope.go:117] "RemoveContainer" containerID="b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d" Feb 18 01:43:23 crc kubenswrapper[4858]: E0218 01:43:23.745730 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d\": container with ID starting with b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d not found: ID does not exist" containerID="b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.745749 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d"} err="failed to get container status \"b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d\": rpc error: code = NotFound desc = could not find container \"b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d\": container with ID starting with b488ff2743b65ec4bad981991efba51dbc8699ace4ff1f2e5cc7f9c4322d363d not found: ID does not exist" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.745763 4858 scope.go:117] "RemoveContainer" containerID="73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c" Feb 18 01:43:23 crc kubenswrapper[4858]: E0218 01:43:23.746889 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c\": container with ID starting with 73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c not found: ID does not exist" containerID="73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c" Feb 18 01:43:23 crc kubenswrapper[4858]: I0218 01:43:23.746910 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c"} err="failed to get container status \"73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c\": rpc error: code = NotFound desc = could not find container \"73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c\": container with ID starting with 73b06a2841b4e9002c1ffd493937b7a4786937feb27d2b70cbce32a830f9fe2c not found: ID does not exist" Feb 18 01:43:25 crc kubenswrapper[4858]: I0218 01:43:25.435741 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54dbed0-8212-440d-82da-2819cde38c72" path="/var/lib/kubelet/pods/b54dbed0-8212-440d-82da-2819cde38c72/volumes" Feb 18 01:43:28 crc kubenswrapper[4858]: E0218 01:43:28.421808 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:43:34 crc kubenswrapper[4858]: E0218 01:43:34.421839 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:43:40 crc kubenswrapper[4858]: I0218 01:43:40.423335 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:43:40 crc kubenswrapper[4858]: E0218 01:43:40.537338 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:43:40 crc kubenswrapper[4858]: E0218 01:43:40.537415 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:43:40 crc kubenswrapper[4858]: E0218 01:43:40.537650 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:43:40 crc kubenswrapper[4858]: E0218 01:43:40.538944 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:43:47 crc kubenswrapper[4858]: E0218 01:43:47.570402 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:43:47 crc kubenswrapper[4858]: E0218 01:43:47.570906 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:43:47 crc kubenswrapper[4858]: E0218 01:43:47.571046 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:43:47 crc kubenswrapper[4858]: E0218 01:43:47.572134 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:43:53 crc kubenswrapper[4858]: E0218 01:43:53.421783 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:43:58 crc kubenswrapper[4858]: E0218 01:43:58.423769 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:44:04 crc kubenswrapper[4858]: E0218 01:44:04.422865 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:44:09 crc kubenswrapper[4858]: E0218 01:44:09.421798 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:44:17 crc kubenswrapper[4858]: E0218 01:44:17.433100 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:44:20 crc kubenswrapper[4858]: E0218 01:44:20.421180 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:44:30 crc kubenswrapper[4858]: E0218 01:44:30.420802 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:44:32 crc kubenswrapper[4858]: E0218 01:44:32.422537 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:44:44 crc kubenswrapper[4858]: E0218 01:44:44.421208 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:44:44 crc kubenswrapper[4858]: E0218 01:44:44.421195 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:44:55 crc kubenswrapper[4858]: I0218 01:44:55.265065 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:44:55 crc kubenswrapper[4858]: I0218 01:44:55.265733 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:44:55 crc kubenswrapper[4858]: E0218 01:44:55.425185 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 
18 01:44:57 crc kubenswrapper[4858]: E0218 01:44:57.427322 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.188052 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj"] Feb 18 01:45:00 crc kubenswrapper[4858]: E0218 01:45:00.188894 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54dbed0-8212-440d-82da-2819cde38c72" containerName="registry-server" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.188912 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54dbed0-8212-440d-82da-2819cde38c72" containerName="registry-server" Feb 18 01:45:00 crc kubenswrapper[4858]: E0218 01:45:00.188937 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54dbed0-8212-440d-82da-2819cde38c72" containerName="extract-content" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.188945 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54dbed0-8212-440d-82da-2819cde38c72" containerName="extract-content" Feb 18 01:45:00 crc kubenswrapper[4858]: E0218 01:45:00.188982 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54dbed0-8212-440d-82da-2819cde38c72" containerName="extract-utilities" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.188991 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54dbed0-8212-440d-82da-2819cde38c72" containerName="extract-utilities" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.189444 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54dbed0-8212-440d-82da-2819cde38c72" containerName="registry-server" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.190389 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.193549 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.197700 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.204267 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj"] Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.249629 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-secret-volume\") pod \"collect-profiles-29522985-g6smj\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.249744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpcqb\" (UniqueName: \"kubernetes.io/projected/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-kube-api-access-vpcqb\") pod \"collect-profiles-29522985-g6smj\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.249833 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-config-volume\") pod \"collect-profiles-29522985-g6smj\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.354128 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-secret-volume\") pod \"collect-profiles-29522985-g6smj\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.354216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpcqb\" (UniqueName: \"kubernetes.io/projected/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-kube-api-access-vpcqb\") pod \"collect-profiles-29522985-g6smj\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.354277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-config-volume\") pod \"collect-profiles-29522985-g6smj\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.355322 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-config-volume\") pod 
\"collect-profiles-29522985-g6smj\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.367360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-secret-volume\") pod \"collect-profiles-29522985-g6smj\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.381519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpcqb\" (UniqueName: \"kubernetes.io/projected/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-kube-api-access-vpcqb\") pod \"collect-profiles-29522985-g6smj\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:00 crc kubenswrapper[4858]: I0218 01:45:00.516224 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:01 crc kubenswrapper[4858]: I0218 01:45:01.028596 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj"] Feb 18 01:45:01 crc kubenswrapper[4858]: W0218 01:45:01.034115 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1fdd56_5406_42cf_bb12_46c3f9fbfe63.slice/crio-ee8fa417ca5a43fd434579ed96b2cbb537ee1934779983a2f2c7fb2e971092eb WatchSource:0}: Error finding container ee8fa417ca5a43fd434579ed96b2cbb537ee1934779983a2f2c7fb2e971092eb: Status 404 returned error can't find the container with id ee8fa417ca5a43fd434579ed96b2cbb537ee1934779983a2f2c7fb2e971092eb Feb 18 01:45:01 crc kubenswrapper[4858]: I0218 01:45:01.708799 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a1fdd56-5406-42cf-bb12-46c3f9fbfe63" containerID="e9c785d4bcb2ee68001a901747128c9a25eec29a5891cfda36c4aa0c1767d5fa" exitCode=0 Feb 18 01:45:01 crc kubenswrapper[4858]: I0218 01:45:01.708926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" event={"ID":"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63","Type":"ContainerDied","Data":"e9c785d4bcb2ee68001a901747128c9a25eec29a5891cfda36c4aa0c1767d5fa"} Feb 18 01:45:01 crc kubenswrapper[4858]: I0218 01:45:01.709173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" event={"ID":"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63","Type":"ContainerStarted","Data":"ee8fa417ca5a43fd434579ed96b2cbb537ee1934779983a2f2c7fb2e971092eb"} Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.295441 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.424178 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpcqb\" (UniqueName: \"kubernetes.io/projected/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-kube-api-access-vpcqb\") pod \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.424312 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-config-volume\") pod \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.424394 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-secret-volume\") pod \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\" (UID: \"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63\") " Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.424984 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a1fdd56-5406-42cf-bb12-46c3f9fbfe63" (UID: "2a1fdd56-5406-42cf-bb12-46c3f9fbfe63"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.425403 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.431549 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-kube-api-access-vpcqb" (OuterVolumeSpecName: "kube-api-access-vpcqb") pod "2a1fdd56-5406-42cf-bb12-46c3f9fbfe63" (UID: "2a1fdd56-5406-42cf-bb12-46c3f9fbfe63"). InnerVolumeSpecName "kube-api-access-vpcqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.453688 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a1fdd56-5406-42cf-bb12-46c3f9fbfe63" (UID: "2a1fdd56-5406-42cf-bb12-46c3f9fbfe63"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.526860 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.527057 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpcqb\" (UniqueName: \"kubernetes.io/projected/2a1fdd56-5406-42cf-bb12-46c3f9fbfe63-kube-api-access-vpcqb\") on node \"crc\" DevicePath \"\"" Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.738305 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.738308 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-g6smj" event={"ID":"2a1fdd56-5406-42cf-bb12-46c3f9fbfe63","Type":"ContainerDied","Data":"ee8fa417ca5a43fd434579ed96b2cbb537ee1934779983a2f2c7fb2e971092eb"} Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.738478 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee8fa417ca5a43fd434579ed96b2cbb537ee1934779983a2f2c7fb2e971092eb" Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.740760 4858 generic.go:334] "Generic (PLEG): container finished" podID="65cd5b4f-e1ce-401d-b2e7-9c622282c342" containerID="9ef348ef56eb0050776c36909b64976169fbd0688bdbe5b6099aa0c44c484296" exitCode=2 Feb 18 01:45:03 crc kubenswrapper[4858]: I0218 01:45:03.740808 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" event={"ID":"65cd5b4f-e1ce-401d-b2e7-9c622282c342","Type":"ContainerDied","Data":"9ef348ef56eb0050776c36909b64976169fbd0688bdbe5b6099aa0c44c484296"} Feb 18 01:45:04 crc kubenswrapper[4858]: I0218 01:45:04.382485 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl"] Feb 18 01:45:04 crc kubenswrapper[4858]: I0218 01:45:04.398285 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-ch6tl"] Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.417703 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.430977 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd2dd73-4a3b-4264-bdac-060f1c49c9e6" path="/var/lib/kubelet/pods/7cd2dd73-4a3b-4264-bdac-060f1c49c9e6/volumes" Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.470779 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-inventory\") pod \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.470903 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-ssh-key-openstack-edpm-ipam\") pod \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.470953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htlnf\" (UniqueName: \"kubernetes.io/projected/65cd5b4f-e1ce-401d-b2e7-9c622282c342-kube-api-access-htlnf\") pod \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\" (UID: \"65cd5b4f-e1ce-401d-b2e7-9c622282c342\") " Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.477920 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65cd5b4f-e1ce-401d-b2e7-9c622282c342-kube-api-access-htlnf" (OuterVolumeSpecName: "kube-api-access-htlnf") pod "65cd5b4f-e1ce-401d-b2e7-9c622282c342" (UID: 
"65cd5b4f-e1ce-401d-b2e7-9c622282c342"). InnerVolumeSpecName "kube-api-access-htlnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.501081 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "65cd5b4f-e1ce-401d-b2e7-9c622282c342" (UID: "65cd5b4f-e1ce-401d-b2e7-9c622282c342"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.504110 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-inventory" (OuterVolumeSpecName: "inventory") pod "65cd5b4f-e1ce-401d-b2e7-9c622282c342" (UID: "65cd5b4f-e1ce-401d-b2e7-9c622282c342"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.574022 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.574052 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65cd5b4f-e1ce-401d-b2e7-9c622282c342-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.574067 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htlnf\" (UniqueName: \"kubernetes.io/projected/65cd5b4f-e1ce-401d-b2e7-9c622282c342-kube-api-access-htlnf\") on node \"crc\" DevicePath \"\"" Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.761632 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" event={"ID":"65cd5b4f-e1ce-401d-b2e7-9c622282c342","Type":"ContainerDied","Data":"2f4e6152f7b148894fc10cf0e31b69ae93adfc55c6d01baa3cef87d428a3b2e3"} Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.761704 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4e6152f7b148894fc10cf0e31b69ae93adfc55c6d01baa3cef87d428a3b2e3" Feb 18 01:45:05 crc kubenswrapper[4858]: I0218 01:45:05.761702 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4" Feb 18 01:45:08 crc kubenswrapper[4858]: E0218 01:45:08.422346 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:45:09 crc kubenswrapper[4858]: E0218 01:45:09.421168 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:45:20 crc kubenswrapper[4858]: E0218 01:45:20.423912 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:45:22 crc kubenswrapper[4858]: E0218 01:45:22.422567 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:45:25 crc kubenswrapper[4858]: I0218 01:45:25.264884 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:45:25 crc kubenswrapper[4858]: I0218 01:45:25.265555 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:45:32 crc kubenswrapper[4858]: I0218 01:45:32.461746 4858 scope.go:117] "RemoveContainer" containerID="5162e818e0bf9da62026d90d61740137d5447060b453d78eb1d715836d1db42d" Feb 18 01:45:33 crc kubenswrapper[4858]: E0218 01:45:33.421534 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:45:34 crc kubenswrapper[4858]: E0218 01:45:34.422247 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:45:45 crc kubenswrapper[4858]: E0218 01:45:45.422549 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:45:46 crc kubenswrapper[4858]: E0218 01:45:46.422032 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:45:55 crc kubenswrapper[4858]: I0218 01:45:55.265348 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:45:55 crc kubenswrapper[4858]: I0218 01:45:55.266901 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:45:55 crc kubenswrapper[4858]: I0218 01:45:55.267120 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:45:55 crc kubenswrapper[4858]: I0218 01:45:55.268116 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:45:55 crc kubenswrapper[4858]: I0218 01:45:55.268315 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" gracePeriod=600 Feb 18 01:45:55 crc kubenswrapper[4858]: E0218 01:45:55.409135 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:45:56 crc kubenswrapper[4858]: I0218 01:45:56.350337 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" exitCode=0 Feb 18 01:45:56 crc kubenswrapper[4858]: I0218 01:45:56.350403 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" 
event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49"} Feb 18 01:45:56 crc kubenswrapper[4858]: I0218 01:45:56.350763 4858 scope.go:117] "RemoveContainer" containerID="068364f64e37a652fbb7f5b1e3f6cc4f127ff5583f5759450d7d6ed6d0ee50a9" Feb 18 01:45:56 crc kubenswrapper[4858]: I0218 01:45:56.351387 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:45:56 crc kubenswrapper[4858]: E0218 01:45:56.351792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:45:57 crc kubenswrapper[4858]: E0218 01:45:57.420944 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:45:59 crc kubenswrapper[4858]: E0218 01:45:59.422927 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:46:09 crc kubenswrapper[4858]: I0218 01:46:09.420304 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:46:09 crc kubenswrapper[4858]: E0218 01:46:09.421544 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:46:12 crc kubenswrapper[4858]: E0218 01:46:12.422636 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:46:13 crc kubenswrapper[4858]: E0218 01:46:13.421241 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:46:20 crc kubenswrapper[4858]: I0218 01:46:20.420371 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:46:20 crc kubenswrapper[4858]: E0218 01:46:20.421443 
4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:46:27 crc kubenswrapper[4858]: E0218 01:46:27.441423 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:46:27 crc kubenswrapper[4858]: E0218 01:46:27.442435 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:46:35 crc kubenswrapper[4858]: I0218 01:46:35.419960 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:46:35 crc kubenswrapper[4858]: E0218 01:46:35.420656 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:46:40 crc kubenswrapper[4858]: E0218 01:46:40.422150 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:46:40 crc kubenswrapper[4858]: E0218 01:46:40.422278 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:46:46 crc kubenswrapper[4858]: I0218 01:46:46.421104 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:46:46 crc kubenswrapper[4858]: E0218 01:46:46.422590 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:46:52 crc kubenswrapper[4858]: E0218 01:46:52.424928 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:46:55 crc kubenswrapper[4858]: E0218 01:46:55.422682 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:46:57 crc kubenswrapper[4858]: I0218 01:46:57.460197 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:46:57 crc kubenswrapper[4858]: E0218 01:46:57.461704 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:47:07 crc kubenswrapper[4858]: E0218 01:47:07.433015 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:47:08 crc kubenswrapper[4858]: E0218 01:47:08.422103 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:47:11 crc kubenswrapper[4858]: I0218 01:47:11.420433 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:47:11 crc kubenswrapper[4858]: E0218 01:47:11.421478 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:47:20 crc kubenswrapper[4858]: E0218 01:47:20.425791 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.061366 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c77n8"] Feb 18 01:47:21 crc kubenswrapper[4858]: E0218 01:47:21.061940 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="65cd5b4f-e1ce-401d-b2e7-9c622282c342" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.061971 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd5b4f-e1ce-401d-b2e7-9c622282c342" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:47:21 crc kubenswrapper[4858]: E0218 01:47:21.062004 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1fdd56-5406-42cf-bb12-46c3f9fbfe63" containerName="collect-profiles" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.062012 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1fdd56-5406-42cf-bb12-46c3f9fbfe63" containerName="collect-profiles" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.062280 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1fdd56-5406-42cf-bb12-46c3f9fbfe63" containerName="collect-profiles" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.062310 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd5b4f-e1ce-401d-b2e7-9c622282c342" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.064293 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.076514 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c77n8"] Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.171603 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-utilities\") pod \"redhat-marketplace-c77n8\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.171869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-catalog-content\") pod \"redhat-marketplace-c77n8\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.171965 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9p97\" (UniqueName: \"kubernetes.io/projected/d47bba2c-4966-441c-95f7-85e5dfef245d-kube-api-access-d9p97\") pod \"redhat-marketplace-c77n8\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.274114 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-utilities\") pod \"redhat-marketplace-c77n8\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.274169 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-catalog-content\") pod \"redhat-marketplace-c77n8\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " 
pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.274263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9p97\" (UniqueName: \"kubernetes.io/projected/d47bba2c-4966-441c-95f7-85e5dfef245d-kube-api-access-d9p97\") pod \"redhat-marketplace-c77n8\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.274793 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-utilities\") pod \"redhat-marketplace-c77n8\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.274803 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-catalog-content\") pod \"redhat-marketplace-c77n8\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.293133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9p97\" (UniqueName: \"kubernetes.io/projected/d47bba2c-4966-441c-95f7-85e5dfef245d-kube-api-access-d9p97\") pod \"redhat-marketplace-c77n8\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.431159 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:21 crc kubenswrapper[4858]: W0218 01:47:21.912664 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd47bba2c_4966_441c_95f7_85e5dfef245d.slice/crio-fb9557ea90147d2ab591c6c6bb1afa8dbe6849ce17589053070b55cfed3bc4cc WatchSource:0}: Error finding container fb9557ea90147d2ab591c6c6bb1afa8dbe6849ce17589053070b55cfed3bc4cc: Status 404 returned error can't find the container with id fb9557ea90147d2ab591c6c6bb1afa8dbe6849ce17589053070b55cfed3bc4cc Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.915602 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c77n8"] Feb 18 01:47:21 crc kubenswrapper[4858]: I0218 01:47:21.957776 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c77n8" event={"ID":"d47bba2c-4966-441c-95f7-85e5dfef245d","Type":"ContainerStarted","Data":"fb9557ea90147d2ab591c6c6bb1afa8dbe6849ce17589053070b55cfed3bc4cc"} Feb 18 01:47:22 crc kubenswrapper[4858]: E0218 01:47:22.422360 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:47:22 crc kubenswrapper[4858]: I0218 01:47:22.971009 4858 generic.go:334] "Generic (PLEG): container finished" podID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerID="6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e" exitCode=0 Feb 18 01:47:22 crc kubenswrapper[4858]: 
I0218 01:47:22.971054 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c77n8" event={"ID":"d47bba2c-4966-441c-95f7-85e5dfef245d","Type":"ContainerDied","Data":"6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e"} Feb 18 01:47:23 crc kubenswrapper[4858]: I0218 01:47:23.419346 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:47:23 crc kubenswrapper[4858]: E0218 01:47:23.419969 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:47:25 crc kubenswrapper[4858]: I0218 01:47:25.003468 4858 generic.go:334] "Generic (PLEG): container finished" podID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerID="7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b" exitCode=0 Feb 18 01:47:25 crc kubenswrapper[4858]: I0218 01:47:25.003546 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c77n8" event={"ID":"d47bba2c-4966-441c-95f7-85e5dfef245d","Type":"ContainerDied","Data":"7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b"} Feb 18 01:47:27 crc kubenswrapper[4858]: I0218 01:47:27.048171 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c77n8" event={"ID":"d47bba2c-4966-441c-95f7-85e5dfef245d","Type":"ContainerStarted","Data":"9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84"} Feb 18 01:47:27 crc kubenswrapper[4858]: I0218 01:47:27.093813 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c77n8" podStartSLOduration=3.665330268 podStartE2EDuration="6.093780317s" podCreationTimestamp="2026-02-18 01:47:21 +0000 UTC" firstStartedPulling="2026-02-18 01:47:22.974211451 +0000 UTC m=+4396.280048183" lastFinishedPulling="2026-02-18 01:47:25.40266147 +0000 UTC m=+4398.708498232" observedRunningTime="2026-02-18 01:47:27.067113846 +0000 UTC m=+4400.372950618" watchObservedRunningTime="2026-02-18 01:47:27.093780317 +0000 UTC m=+4400.399617089" Feb 18 01:47:31 crc kubenswrapper[4858]: I0218 01:47:31.438932 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:31 crc kubenswrapper[4858]: I0218 01:47:31.440019 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:31 crc kubenswrapper[4858]: I0218 01:47:31.483474 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:32 crc kubenswrapper[4858]: I0218 01:47:32.175806 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:32 crc kubenswrapper[4858]: I0218 01:47:32.247940 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c77n8"] Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.126434 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-c77n8" podUID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerName="registry-server" containerID="cri-o://9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84" gracePeriod=2 Feb 18 01:47:34 crc kubenswrapper[4858]: E0218 01:47:34.422526 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.759709 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.770229 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-utilities\") pod \"d47bba2c-4966-441c-95f7-85e5dfef245d\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.770327 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-catalog-content\") pod \"d47bba2c-4966-441c-95f7-85e5dfef245d\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.770367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9p97\" (UniqueName: \"kubernetes.io/projected/d47bba2c-4966-441c-95f7-85e5dfef245d-kube-api-access-d9p97\") pod \"d47bba2c-4966-441c-95f7-85e5dfef245d\" (UID: \"d47bba2c-4966-441c-95f7-85e5dfef245d\") " Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.771943 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-utilities" (OuterVolumeSpecName: "utilities") pod "d47bba2c-4966-441c-95f7-85e5dfef245d" (UID: "d47bba2c-4966-441c-95f7-85e5dfef245d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.783786 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47bba2c-4966-441c-95f7-85e5dfef245d-kube-api-access-d9p97" (OuterVolumeSpecName: "kube-api-access-d9p97") pod "d47bba2c-4966-441c-95f7-85e5dfef245d" (UID: "d47bba2c-4966-441c-95f7-85e5dfef245d"). InnerVolumeSpecName "kube-api-access-d9p97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.815939 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d47bba2c-4966-441c-95f7-85e5dfef245d" (UID: "d47bba2c-4966-441c-95f7-85e5dfef245d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.872403 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.872441 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d47bba2c-4966-441c-95f7-85e5dfef245d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:47:34 crc kubenswrapper[4858]: I0218 01:47:34.872452 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9p97\" (UniqueName: \"kubernetes.io/projected/d47bba2c-4966-441c-95f7-85e5dfef245d-kube-api-access-d9p97\") on node \"crc\" DevicePath \"\"" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.141313 4858 generic.go:334] "Generic (PLEG): container finished" podID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerID="9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84" exitCode=0 Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.141474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c77n8" event={"ID":"d47bba2c-4966-441c-95f7-85e5dfef245d","Type":"ContainerDied","Data":"9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84"} Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.141584 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c77n8" event={"ID":"d47bba2c-4966-441c-95f7-85e5dfef245d","Type":"ContainerDied","Data":"fb9557ea90147d2ab591c6c6bb1afa8dbe6849ce17589053070b55cfed3bc4cc"} Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.141617 4858 scope.go:117] "RemoveContainer" containerID="9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.141854 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c77n8" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.175330 4858 scope.go:117] "RemoveContainer" containerID="7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.200314 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c77n8"] Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.216101 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c77n8"] Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.222301 4858 scope.go:117] "RemoveContainer" containerID="6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.279820 4858 scope.go:117] "RemoveContainer" containerID="9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84" Feb 18 01:47:35 crc kubenswrapper[4858]: E0218 01:47:35.281373 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84\": container with ID starting with 9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84 not found: ID does not exist" containerID="9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.281522 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84"} err="failed to get container status \"9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84\": rpc error: code = NotFound desc = could not find container \"9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84\": container with ID starting with 9154e1dc05f9e59a01cff67e405b751df2d5c240af4fae7eb7ca08a396b9ca84 not found: ID does not exist" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.281606 4858 scope.go:117] "RemoveContainer" containerID="7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b" Feb 18 01:47:35 crc kubenswrapper[4858]: E0218 01:47:35.283998 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b\": container with ID starting with 7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b not found: ID does not exist" containerID="7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.284056 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b"} err="failed to get container status \"7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b\": rpc error: code = NotFound desc = could not find container \"7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b\": container with ID starting with 7aae8bea7a9f91fc7f669e003634cb1543c63f1625be57e0052a0124ac99596b not found: ID does not exist" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.284084 4858 scope.go:117] "RemoveContainer" containerID="6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e" Feb 18 01:47:35 crc kubenswrapper[4858]: E0218 01:47:35.284389 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e\": container with ID starting with 6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e not found: ID does not exist" containerID="6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.284426 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e"} err="failed to get container status \"6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e\": rpc error: code = NotFound desc = could not find container \"6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e\": container with ID starting with 6c48528549bb3c9c356b5ac7afda5fa665517c93831f052b1e634a2b2844e04e not found: ID does not exist" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.420191 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:47:35 crc kubenswrapper[4858]: E0218 01:47:35.420517 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:47:35 crc kubenswrapper[4858]: I0218 01:47:35.436223 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d47bba2c-4966-441c-95f7-85e5dfef245d" path="/var/lib/kubelet/pods/d47bba2c-4966-441c-95f7-85e5dfef245d/volumes" Feb 18 01:47:37 crc kubenswrapper[4858]: E0218 01:47:37.441856 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:47:47 crc kubenswrapper[4858]: I0218 01:47:47.436264 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:47:47 crc kubenswrapper[4858]: E0218 01:47:47.438004 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:47:48 crc kubenswrapper[4858]: E0218 01:47:48.422286 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:47:50 crc kubenswrapper[4858]: E0218 01:47:50.422138 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:47:58 crc kubenswrapper[4858]: I0218 01:47:58.421069 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:47:58 crc kubenswrapper[4858]: E0218 01:47:58.421952 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:47:59 crc kubenswrapper[4858]: E0218 01:47:59.423010 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:47:59 crc kubenswrapper[4858]: I0218 01:47:59.922639 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8rbxd"] Feb 18 01:47:59 crc kubenswrapper[4858]: E0218 01:47:59.923241 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerName="registry-server" Feb 18 01:47:59 crc kubenswrapper[4858]: I0218 01:47:59.923273 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerName="registry-server" Feb 18 01:47:59 crc kubenswrapper[4858]: E0218 01:47:59.923307 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerName="extract-content" Feb 18 01:47:59 crc kubenswrapper[4858]: I0218 01:47:59.923319 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerName="extract-content" Feb 18 01:47:59 crc kubenswrapper[4858]: E0218 01:47:59.923342 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerName="extract-utilities" Feb 18 01:47:59 crc kubenswrapper[4858]: I0218 01:47:59.923353 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerName="extract-utilities" Feb 18 01:47:59 crc kubenswrapper[4858]: I0218 01:47:59.923692 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d47bba2c-4966-441c-95f7-85e5dfef245d" containerName="registry-server" Feb 18 01:47:59 crc kubenswrapper[4858]: I0218 01:47:59.926130 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:47:59 crc kubenswrapper[4858]: I0218 01:47:59.970223 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rbxd"] Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.000921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-catalog-content\") pod \"redhat-operators-8rbxd\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.001055 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-utilities\") pod \"redhat-operators-8rbxd\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.001106 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7fk\" (UniqueName: \"kubernetes.io/projected/354c887b-f8cf-4e56-b0a8-2364651da60b-kube-api-access-zt7fk\") pod \"redhat-operators-8rbxd\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.102728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-catalog-content\") pod \"redhat-operators-8rbxd\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.102799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-utilities\") pod \"redhat-operators-8rbxd\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.102824 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt7fk\" (UniqueName: \"kubernetes.io/projected/354c887b-f8cf-4e56-b0a8-2364651da60b-kube-api-access-zt7fk\") pod \"redhat-operators-8rbxd\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.103939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-utilities\") pod \"redhat-operators-8rbxd\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.103910 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-catalog-content\") pod \"redhat-operators-8rbxd\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.141605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zt7fk\" (UniqueName: \"kubernetes.io/projected/354c887b-f8cf-4e56-b0a8-2364651da60b-kube-api-access-zt7fk\") pod \"redhat-operators-8rbxd\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.286219 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:00 crc kubenswrapper[4858]: I0218 01:48:00.778268 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rbxd"] Feb 18 01:48:01 crc kubenswrapper[4858]: I0218 01:48:01.462994 4858 generic.go:334] "Generic (PLEG): container finished" podID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerID="02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5" exitCode=0 Feb 18 01:48:01 crc kubenswrapper[4858]: I0218 01:48:01.463043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rbxd" event={"ID":"354c887b-f8cf-4e56-b0a8-2364651da60b","Type":"ContainerDied","Data":"02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5"} Feb 18 01:48:01 crc kubenswrapper[4858]: I0218 01:48:01.463094 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rbxd" event={"ID":"354c887b-f8cf-4e56-b0a8-2364651da60b","Type":"ContainerStarted","Data":"2c9f46f46dafa450663ce3378c49a3882a28895352c5bed945abed6d614a0079"} Feb 18 01:48:02 crc kubenswrapper[4858]: E0218 01:48:02.421039 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:48:03 crc kubenswrapper[4858]: I0218 01:48:03.495192 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rbxd" event={"ID":"354c887b-f8cf-4e56-b0a8-2364651da60b","Type":"ContainerStarted","Data":"8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579"} Feb 18 01:48:05 crc kubenswrapper[4858]: I0218 01:48:05.529571 4858 generic.go:334] "Generic (PLEG): container finished" podID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerID="8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579" exitCode=0 Feb 18 01:48:05 crc kubenswrapper[4858]: I0218 01:48:05.529708 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rbxd" event={"ID":"354c887b-f8cf-4e56-b0a8-2364651da60b","Type":"ContainerDied","Data":"8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579"} Feb 18 01:48:06 crc kubenswrapper[4858]: I0218 01:48:06.560087 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rbxd" event={"ID":"354c887b-f8cf-4e56-b0a8-2364651da60b","Type":"ContainerStarted","Data":"3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d"} Feb 18 01:48:06 crc kubenswrapper[4858]: I0218 01:48:06.593490 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8rbxd" podStartSLOduration=3.104715589 podStartE2EDuration="7.593466119s" podCreationTimestamp="2026-02-18 01:47:59 +0000 UTC" firstStartedPulling="2026-02-18 01:48:01.46479414 +0000 UTC m=+4434.770630872" 
lastFinishedPulling="2026-02-18 01:48:05.95354463 +0000 UTC m=+4439.259381402" observedRunningTime="2026-02-18 01:48:06.584419975 +0000 UTC m=+4439.890256707" watchObservedRunningTime="2026-02-18 01:48:06.593466119 +0000 UTC m=+4439.899302891" Feb 18 01:48:10 crc kubenswrapper[4858]: I0218 01:48:10.286548 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:10 crc kubenswrapper[4858]: I0218 01:48:10.287036 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:11 crc kubenswrapper[4858]: I0218 01:48:11.361754 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rbxd" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerName="registry-server" probeResult="failure" output=< Feb 18 01:48:11 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 01:48:11 crc kubenswrapper[4858]: > Feb 18 01:48:12 crc kubenswrapper[4858]: E0218 01:48:12.422589 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:48:13 crc kubenswrapper[4858]: I0218 01:48:13.420464 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:48:13 crc kubenswrapper[4858]: E0218 01:48:13.420912 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:48:16 crc kubenswrapper[4858]: E0218 01:48:16.422787 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:48:20 crc kubenswrapper[4858]: I0218 01:48:20.363381 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:20 crc kubenswrapper[4858]: I0218 01:48:20.433789 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:20 crc kubenswrapper[4858]: I0218 01:48:20.609184 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rbxd"] Feb 18 01:48:21 crc kubenswrapper[4858]: I0218 01:48:21.726184 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8rbxd" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerName="registry-server" containerID="cri-o://3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d" gracePeriod=2 Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.275682 4858 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.336725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt7fk\" (UniqueName: \"kubernetes.io/projected/354c887b-f8cf-4e56-b0a8-2364651da60b-kube-api-access-zt7fk\") pod \"354c887b-f8cf-4e56-b0a8-2364651da60b\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.336845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-catalog-content\") pod \"354c887b-f8cf-4e56-b0a8-2364651da60b\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.336976 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-utilities\") pod \"354c887b-f8cf-4e56-b0a8-2364651da60b\" (UID: \"354c887b-f8cf-4e56-b0a8-2364651da60b\") " Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.338308 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-utilities" (OuterVolumeSpecName: "utilities") pod "354c887b-f8cf-4e56-b0a8-2364651da60b" (UID: "354c887b-f8cf-4e56-b0a8-2364651da60b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.346115 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354c887b-f8cf-4e56-b0a8-2364651da60b-kube-api-access-zt7fk" (OuterVolumeSpecName: "kube-api-access-zt7fk") pod "354c887b-f8cf-4e56-b0a8-2364651da60b" (UID: "354c887b-f8cf-4e56-b0a8-2364651da60b"). InnerVolumeSpecName "kube-api-access-zt7fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.440921 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt7fk\" (UniqueName: \"kubernetes.io/projected/354c887b-f8cf-4e56-b0a8-2364651da60b-kube-api-access-zt7fk\") on node \"crc\" DevicePath \"\"" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.441235 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.494283 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "354c887b-f8cf-4e56-b0a8-2364651da60b" (UID: "354c887b-f8cf-4e56-b0a8-2364651da60b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.543241 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354c887b-f8cf-4e56-b0a8-2364651da60b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.737160 4858 generic.go:334] "Generic (PLEG): container finished" podID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerID="3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d" exitCode=0 Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.737202 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rbxd" event={"ID":"354c887b-f8cf-4e56-b0a8-2364651da60b","Type":"ContainerDied","Data":"3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d"} Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.737230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rbxd" event={"ID":"354c887b-f8cf-4e56-b0a8-2364651da60b","Type":"ContainerDied","Data":"2c9f46f46dafa450663ce3378c49a3882a28895352c5bed945abed6d614a0079"} Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.737246 4858 scope.go:117] "RemoveContainer" containerID="3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.737242 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rbxd" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.776564 4858 scope.go:117] "RemoveContainer" containerID="8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.800960 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rbxd"] Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.819991 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8rbxd"] Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.826032 4858 scope.go:117] "RemoveContainer" containerID="02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.886337 4858 scope.go:117] "RemoveContainer" containerID="3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d" Feb 18 01:48:22 crc kubenswrapper[4858]: E0218 01:48:22.887137 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d\": container with ID starting with 3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d not found: ID does not exist" containerID="3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.887288 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d"} err="failed to get container status \"3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d\": rpc error: code = NotFound desc = could not find container \"3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d\": container with ID starting with 3f59076b47da6a9c3b73d8d8b6e34b839225a1df771868221c53834838857f4d not found: ID does not exist" Feb 18 01:48:22 crc 
kubenswrapper[4858]: I0218 01:48:22.887378 4858 scope.go:117] "RemoveContainer" containerID="8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579" Feb 18 01:48:22 crc kubenswrapper[4858]: E0218 01:48:22.887995 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579\": container with ID starting with 8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579 not found: ID does not exist" containerID="8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.888040 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579"} err="failed to get container status \"8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579\": rpc error: code = NotFound desc = could not find container \"8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579\": container with ID starting with 8548d2184983722f2b13be35256f4d2d4bb5eb91483085e7124799cc749d7579 not found: ID does not exist" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.888069 4858 scope.go:117] "RemoveContainer" containerID="02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5" Feb 18 01:48:22 crc kubenswrapper[4858]: E0218 01:48:22.888466 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5\": container with ID starting with 02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5 not found: ID does not exist" containerID="02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5" Feb 18 01:48:22 crc kubenswrapper[4858]: I0218 01:48:22.888518 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5"} err="failed to get container status \"02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5\": rpc error: code = NotFound desc = could not find container \"02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5\": container with ID starting with 02f6438c995ff693a277cf93558e535dc24f26b08a78e93c2ff1f703d7849da5 not found: ID does not exist" Feb 18 01:48:23 crc kubenswrapper[4858]: I0218 01:48:23.434211 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" path="/var/lib/kubelet/pods/354c887b-f8cf-4e56-b0a8-2364651da60b/volumes" Feb 18 01:48:24 crc kubenswrapper[4858]: I0218 01:48:24.420183 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:48:24 crc kubenswrapper[4858]: E0218 01:48:24.420821 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:48:24 crc kubenswrapper[4858]: E0218 01:48:24.422078 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:48:30 crc kubenswrapper[4858]: E0218 01:48:30.421832 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:48:38 crc kubenswrapper[4858]: I0218 01:48:38.419597 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:48:38 crc kubenswrapper[4858]: E0218 01:48:38.420403 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:48:38 crc kubenswrapper[4858]: E0218 01:48:38.422100 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:48:45 crc kubenswrapper[4858]: I0218 01:48:45.422328 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:48:45 crc kubenswrapper[4858]: E0218 01:48:45.541215 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:48:45 crc kubenswrapper[4858]: E0218 01:48:45.541477 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:48:45 crc kubenswrapper[4858]: E0218 01:48:45.541879 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:48:45 crc kubenswrapper[4858]: E0218 01:48:45.543322 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:48:51 crc kubenswrapper[4858]: E0218 01:48:51.520819 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:48:51 crc kubenswrapper[4858]: E0218 01:48:51.521336 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:48:51 crc kubenswrapper[4858]: E0218 01:48:51.521456 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:48:51 crc kubenswrapper[4858]: E0218 01:48:51.523009 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:48:52 crc kubenswrapper[4858]: I0218 01:48:52.419793 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:48:52 crc kubenswrapper[4858]: E0218 01:48:52.420012 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:48:59 crc kubenswrapper[4858]: E0218 01:48:59.422722 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:49:02 crc kubenswrapper[4858]: E0218 01:49:02.422799 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:49:07 crc kubenswrapper[4858]: I0218 01:49:07.428309 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:49:07 crc 
kubenswrapper[4858]: E0218 01:49:07.429012 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:49:13 crc kubenswrapper[4858]: E0218 01:49:13.422274 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:49:16 crc kubenswrapper[4858]: E0218 01:49:16.421591 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:49:20 crc kubenswrapper[4858]: I0218 01:49:20.420349 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:49:20 crc kubenswrapper[4858]: E0218 01:49:20.421243 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:49:24 crc kubenswrapper[4858]: E0218 01:49:24.422327 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:49:31 crc kubenswrapper[4858]: E0218 01:49:31.425580 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:49:34 crc kubenswrapper[4858]: I0218 01:49:34.420856 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:49:34 crc kubenswrapper[4858]: E0218 01:49:34.422748 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:49:36 crc kubenswrapper[4858]: E0218 01:49:36.421813 4858 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:49:46 crc kubenswrapper[4858]: I0218 01:49:46.422748 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:49:46 crc kubenswrapper[4858]: E0218 01:49:46.423546 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:49:46 crc kubenswrapper[4858]: E0218 01:49:46.423850 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:49:49 crc kubenswrapper[4858]: E0218 01:49:49.421376 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:49:57 crc kubenswrapper[4858]: I0218 01:49:57.427876 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:49:57 crc kubenswrapper[4858]: E0218 01:49:57.428707 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:49:58 crc kubenswrapper[4858]: E0218 01:49:58.421933 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:50:01 crc kubenswrapper[4858]: E0218 01:50:01.424067 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:50:08 crc kubenswrapper[4858]: I0218 01:50:08.420347 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:50:08 crc kubenswrapper[4858]: E0218 01:50:08.421615 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:50:12 crc kubenswrapper[4858]: E0218 01:50:12.422792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:50:14 crc kubenswrapper[4858]: E0218 01:50:14.420958 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.038443 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6"] Feb 18 01:50:23 crc kubenswrapper[4858]: E0218 01:50:23.039641 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerName="registry-server" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.039657 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerName="registry-server" Feb 18 01:50:23 crc kubenswrapper[4858]: E0218 01:50:23.039677 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerName="extract-content" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.039684 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerName="extract-content" Feb 18 01:50:23 crc kubenswrapper[4858]: E0218 01:50:23.039709 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerName="extract-utilities" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.039719 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerName="extract-utilities" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.039989 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="354c887b-f8cf-4e56-b0a8-2364651da60b" containerName="registry-server" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.041694 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.044713 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.044863 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.050376 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ts27" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.050573 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.065535 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6"] Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.194105 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.194336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dz4p\" (UniqueName: \"kubernetes.io/projected/0882588c-e25d-402e-ba41-76d7bec2ec65-kube-api-access-2dz4p\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.194381 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.298176 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dz4p\" (UniqueName: \"kubernetes.io/projected/0882588c-e25d-402e-ba41-76d7bec2ec65-kube-api-access-2dz4p\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.298282 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.298325 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.313888 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.314276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.325430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dz4p\" (UniqueName: \"kubernetes.io/projected/0882588c-e25d-402e-ba41-76d7bec2ec65-kube-api-access-2dz4p\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.363817 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.420255 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:50:23 crc kubenswrapper[4858]: E0218 01:50:23.420745 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:50:23 crc kubenswrapper[4858]: E0218 01:50:23.421701 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:50:23 crc kubenswrapper[4858]: W0218 01:50:23.962151 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0882588c_e25d_402e_ba41_76d7bec2ec65.slice/crio-9eddf0436322588be75f721c9248199051ecea58de01f56fba6b51d5c8e6f420 WatchSource:0}: Error finding container 9eddf0436322588be75f721c9248199051ecea58de01f56fba6b51d5c8e6f420: Status 404 returned error can't find the container with id 9eddf0436322588be75f721c9248199051ecea58de01f56fba6b51d5c8e6f420 Feb 18 01:50:23 crc kubenswrapper[4858]: I0218 01:50:23.963888 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6"] Feb 18 01:50:24 crc kubenswrapper[4858]: I0218 01:50:24.099059 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" event={"ID":"0882588c-e25d-402e-ba41-76d7bec2ec65","Type":"ContainerStarted","Data":"9eddf0436322588be75f721c9248199051ecea58de01f56fba6b51d5c8e6f420"} Feb 18 01:50:25 crc kubenswrapper[4858]: I0218 01:50:25.109341 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" event={"ID":"0882588c-e25d-402e-ba41-76d7bec2ec65","Type":"ContainerStarted","Data":"dd75b7e37f7cd55758194fbffb33fbe96f69f88eff1cccc6b4abe49db582ba65"} Feb 18 01:50:25 crc kubenswrapper[4858]: I0218 01:50:25.136469 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" podStartSLOduration=1.647637967 podStartE2EDuration="2.136453221s" podCreationTimestamp="2026-02-18 01:50:23 +0000 UTC" firstStartedPulling="2026-02-18 01:50:23.965281804 +0000 UTC m=+4577.271118546" lastFinishedPulling="2026-02-18 01:50:24.454097068 +0000 UTC m=+4577.759933800" observedRunningTime="2026-02-18 01:50:25.129350702 +0000 UTC m=+4578.435187434" watchObservedRunningTime="2026-02-18 01:50:25.136453221 +0000 UTC m=+4578.442289943" Feb 18 01:50:29 crc kubenswrapper[4858]: E0218 01:50:29.422253 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:50:34 crc kubenswrapper[4858]: E0218 01:50:34.421048 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:50:36 crc kubenswrapper[4858]: I0218 01:50:36.419837 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:50:36 crc kubenswrapper[4858]: E0218 01:50:36.420793 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:50:40 crc kubenswrapper[4858]: E0218 01:50:40.423195 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:50:45 crc kubenswrapper[4858]: E0218 01:50:45.422696 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:50:47 crc kubenswrapper[4858]: I0218 01:50:47.886074 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fft8n"] Feb 18 01:50:47 crc kubenswrapper[4858]: I0218 01:50:47.891925 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:47 crc kubenswrapper[4858]: I0218 01:50:47.904700 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fft8n"] Feb 18 01:50:47 crc kubenswrapper[4858]: I0218 01:50:47.954802 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jdp\" (UniqueName: \"kubernetes.io/projected/78ff8481-a18d-49e6-a1f3-9e0fd910724e-kube-api-access-n9jdp\") pod \"certified-operators-fft8n\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:47 crc kubenswrapper[4858]: I0218 01:50:47.954982 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-catalog-content\") pod \"certified-operators-fft8n\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:47 crc kubenswrapper[4858]: I0218 01:50:47.955043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-utilities\") pod \"certified-operators-fft8n\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:48 crc kubenswrapper[4858]: I0218 01:50:48.056661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-catalog-content\") pod \"certified-operators-fft8n\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:48 crc kubenswrapper[4858]: I0218 01:50:48.056902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-utilities\") pod \"certified-operators-fft8n\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:48 crc kubenswrapper[4858]: I0218 01:50:48.057171 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9jdp\" (UniqueName: \"kubernetes.io/projected/78ff8481-a18d-49e6-a1f3-9e0fd910724e-kube-api-access-n9jdp\") pod \"certified-operators-fft8n\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:48 crc kubenswrapper[4858]: I0218 01:50:48.057479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-catalog-content\") pod \"certified-operators-fft8n\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:48 
crc kubenswrapper[4858]: I0218 01:50:48.057971 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-utilities\") pod \"certified-operators-fft8n\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:48 crc kubenswrapper[4858]: I0218 01:50:48.080565 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9jdp\" (UniqueName: \"kubernetes.io/projected/78ff8481-a18d-49e6-a1f3-9e0fd910724e-kube-api-access-n9jdp\") pod \"certified-operators-fft8n\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:48 crc kubenswrapper[4858]: I0218 01:50:48.231653 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:48 crc kubenswrapper[4858]: I0218 01:50:48.419237 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:50:48 crc kubenswrapper[4858]: E0218 01:50:48.419541 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:50:48 crc kubenswrapper[4858]: I0218 01:50:48.826676 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fft8n"] Feb 18 01:50:49 crc kubenswrapper[4858]: I0218 01:50:49.399747 4858 generic.go:334] "Generic (PLEG): container finished" podID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerID="9e1859e45352a9e58eb5ecb4b4035fbf5150d309efc9fb0c007eedd9fae6d5ce" exitCode=0 Feb 18 01:50:49 crc kubenswrapper[4858]: I0218 01:50:49.399816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fft8n" event={"ID":"78ff8481-a18d-49e6-a1f3-9e0fd910724e","Type":"ContainerDied","Data":"9e1859e45352a9e58eb5ecb4b4035fbf5150d309efc9fb0c007eedd9fae6d5ce"} Feb 18 01:50:49 crc kubenswrapper[4858]: I0218 01:50:49.400050 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fft8n" event={"ID":"78ff8481-a18d-49e6-a1f3-9e0fd910724e","Type":"ContainerStarted","Data":"ab683bb0619dfc5ed09637b56e7078639edbf16bb7131a96cbe161259641172e"} Feb 18 01:50:50 crc kubenswrapper[4858]: I0218 01:50:50.421929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fft8n" event={"ID":"78ff8481-a18d-49e6-a1f3-9e0fd910724e","Type":"ContainerStarted","Data":"79163fe1eeab857167c639db20682c6a1ed8f646278d06dc32d60a8c1e3ebe6d"} Feb 18 01:50:51 crc kubenswrapper[4858]: I0218 01:50:51.434732 4858 generic.go:334] "Generic (PLEG): container finished" podID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerID="79163fe1eeab857167c639db20682c6a1ed8f646278d06dc32d60a8c1e3ebe6d" exitCode=0 Feb 18 01:50:51 crc kubenswrapper[4858]: I0218 01:50:51.437575 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fft8n" 
event={"ID":"78ff8481-a18d-49e6-a1f3-9e0fd910724e","Type":"ContainerDied","Data":"79163fe1eeab857167c639db20682c6a1ed8f646278d06dc32d60a8c1e3ebe6d"} Feb 18 01:50:52 crc kubenswrapper[4858]: I0218 01:50:52.446919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fft8n" event={"ID":"78ff8481-a18d-49e6-a1f3-9e0fd910724e","Type":"ContainerStarted","Data":"492e59a33213b4c3a9a47a5eeeed09c007e4dd624f6f8cc7376f984ef16e46a7"} Feb 18 01:50:52 crc kubenswrapper[4858]: I0218 01:50:52.481167 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fft8n" podStartSLOduration=3.07451891 podStartE2EDuration="5.481150443s" podCreationTimestamp="2026-02-18 01:50:47 +0000 UTC" firstStartedPulling="2026-02-18 01:50:49.40157317 +0000 UTC m=+4602.707409902" lastFinishedPulling="2026-02-18 01:50:51.808204683 +0000 UTC m=+4605.114041435" observedRunningTime="2026-02-18 01:50:52.475997262 +0000 UTC m=+4605.781833994" watchObservedRunningTime="2026-02-18 01:50:52.481150443 +0000 UTC m=+4605.786987175" Feb 18 01:50:55 crc kubenswrapper[4858]: E0218 01:50:55.422209 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:50:58 crc kubenswrapper[4858]: I0218 01:50:58.231814 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:58 crc kubenswrapper[4858]: I0218 01:50:58.232097 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:58 crc kubenswrapper[4858]: I0218 01:50:58.315288 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:58 crc kubenswrapper[4858]: I0218 01:50:58.589276 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:50:59 crc kubenswrapper[4858]: E0218 01:50:59.422598 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:50:59 crc kubenswrapper[4858]: I0218 01:50:59.664889 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fft8n"] Feb 18 01:51:00 crc kubenswrapper[4858]: I0218 01:51:00.531878 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fft8n" podUID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerName="registry-server" containerID="cri-o://492e59a33213b4c3a9a47a5eeeed09c007e4dd624f6f8cc7376f984ef16e46a7" gracePeriod=2 Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.595149 4858 generic.go:334] "Generic (PLEG): container finished" podID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerID="492e59a33213b4c3a9a47a5eeeed09c007e4dd624f6f8cc7376f984ef16e46a7" exitCode=0 Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.595193 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fft8n" event={"ID":"78ff8481-a18d-49e6-a1f3-9e0fd910724e","Type":"ContainerDied","Data":"492e59a33213b4c3a9a47a5eeeed09c007e4dd624f6f8cc7376f984ef16e46a7"} Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.670892 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.775919 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-utilities\") pod \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.776002 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9jdp\" (UniqueName: \"kubernetes.io/projected/78ff8481-a18d-49e6-a1f3-9e0fd910724e-kube-api-access-n9jdp\") pod \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.776083 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-catalog-content\") pod \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\" (UID: \"78ff8481-a18d-49e6-a1f3-9e0fd910724e\") " Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.777760 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-utilities" (OuterVolumeSpecName: "utilities") pod "78ff8481-a18d-49e6-a1f3-9e0fd910724e" (UID: "78ff8481-a18d-49e6-a1f3-9e0fd910724e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.786841 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ff8481-a18d-49e6-a1f3-9e0fd910724e-kube-api-access-n9jdp" (OuterVolumeSpecName: "kube-api-access-n9jdp") pod "78ff8481-a18d-49e6-a1f3-9e0fd910724e" (UID: "78ff8481-a18d-49e6-a1f3-9e0fd910724e"). InnerVolumeSpecName "kube-api-access-n9jdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.825684 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78ff8481-a18d-49e6-a1f3-9e0fd910724e" (UID: "78ff8481-a18d-49e6-a1f3-9e0fd910724e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.878636 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.878670 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78ff8481-a18d-49e6-a1f3-9e0fd910724e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:51:01 crc kubenswrapper[4858]: I0218 01:51:01.878683 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9jdp\" (UniqueName: \"kubernetes.io/projected/78ff8481-a18d-49e6-a1f3-9e0fd910724e-kube-api-access-n9jdp\") on node \"crc\" DevicePath \"\"" Feb 18 01:51:02 crc kubenswrapper[4858]: I0218 01:51:02.419565 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:51:02 crc kubenswrapper[4858]: I0218 01:51:02.610102 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fft8n" event={"ID":"78ff8481-a18d-49e6-a1f3-9e0fd910724e","Type":"ContainerDied","Data":"ab683bb0619dfc5ed09637b56e7078639edbf16bb7131a96cbe161259641172e"} Feb 18 01:51:02 crc kubenswrapper[4858]: I0218 01:51:02.610159 4858 scope.go:117] "RemoveContainer" containerID="492e59a33213b4c3a9a47a5eeeed09c007e4dd624f6f8cc7376f984ef16e46a7" Feb 18 01:51:02 crc kubenswrapper[4858]: I0218 01:51:02.610239 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fft8n" Feb 18 01:51:02 crc kubenswrapper[4858]: I0218 01:51:02.662664 4858 scope.go:117] "RemoveContainer" containerID="79163fe1eeab857167c639db20682c6a1ed8f646278d06dc32d60a8c1e3ebe6d" Feb 18 01:51:02 crc kubenswrapper[4858]: I0218 01:51:02.676014 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fft8n"] Feb 18 01:51:02 crc kubenswrapper[4858]: I0218 01:51:02.687774 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fft8n"] Feb 18 01:51:02 crc kubenswrapper[4858]: I0218 01:51:02.701025 4858 scope.go:117] "RemoveContainer" containerID="9e1859e45352a9e58eb5ecb4b4035fbf5150d309efc9fb0c007eedd9fae6d5ce" Feb 18 01:51:03 crc kubenswrapper[4858]: I0218 01:51:03.443446 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" path="/var/lib/kubelet/pods/78ff8481-a18d-49e6-a1f3-9e0fd910724e/volumes" Feb 18 01:51:03 crc kubenswrapper[4858]: I0218 01:51:03.625434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"2f9ea37d20553408f9f1f674761fdb4ff0044163f8659b6ad84a3117b5664a3c"} Feb 18 01:51:07 crc kubenswrapper[4858]: E0218 01:51:07.455255 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:51:11 crc kubenswrapper[4858]: E0218 01:51:11.423132 4858 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:51:21 crc kubenswrapper[4858]: E0218 01:51:21.422260 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:51:24 crc kubenswrapper[4858]: E0218 01:51:24.422750 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:51:33 crc kubenswrapper[4858]: E0218 01:51:33.439671 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:51:39 crc kubenswrapper[4858]: E0218 01:51:39.423077 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:51:45 crc kubenswrapper[4858]: E0218 01:51:45.422826 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:51:53 crc kubenswrapper[4858]: E0218 01:51:53.420993 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:51:56 crc kubenswrapper[4858]: E0218 01:51:56.423290 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:52:05 crc kubenswrapper[4858]: E0218 01:52:05.421879 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:52:07 crc kubenswrapper[4858]: E0218 01:52:07.440758 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:52:18 crc kubenswrapper[4858]: E0218 01:52:18.420807 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:52:19 crc kubenswrapper[4858]: E0218 01:52:19.422629 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:52:30 crc kubenswrapper[4858]: E0218 01:52:30.424086 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:52:32 crc kubenswrapper[4858]: E0218 01:52:32.420845 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:52:41 crc kubenswrapper[4858]: E0218 01:52:41.423040 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:52:45 crc kubenswrapper[4858]: E0218 01:52:45.431021 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:52:55 crc kubenswrapper[4858]: E0218 01:52:55.424731 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:52:57 crc kubenswrapper[4858]: E0218 01:52:57.429690 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:53:02 crc kubenswrapper[4858]: I0218 01:53:02.760454 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a845f908-18e9-47e2-bc4f-01308c8a69b3" containerName="galera" probeResult="failure" output="command timed out" Feb 18 01:53:06 crc kubenswrapper[4858]: E0218 01:53:06.422025 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:53:09 crc kubenswrapper[4858]: E0218 01:53:09.421975 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:53:18 crc kubenswrapper[4858]: E0218 01:53:18.423010 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:53:22 crc kubenswrapper[4858]: E0218 01:53:22.422330 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:53:25 crc kubenswrapper[4858]: I0218 01:53:25.268434 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:53:25 crc kubenswrapper[4858]: I0218 01:53:25.268875 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.610996 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z2xwz"] Feb 18 01:53:28 crc kubenswrapper[4858]: E0218 01:53:28.612087 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerName="extract-utilities" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.612103 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerName="extract-utilities" Feb 18 01:53:28 crc kubenswrapper[4858]: E0218 01:53:28.612139 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerName="extract-content" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.612148 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerName="extract-content" Feb 18 01:53:28 crc kubenswrapper[4858]: E0218 01:53:28.612164 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerName="registry-server" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.612174 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerName="registry-server" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.612428 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ff8481-a18d-49e6-a1f3-9e0fd910724e" containerName="registry-server" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.614885 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.623868 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2xwz"] Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.769007 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65b0616-ca8a-47a9-8cd0-2527a88c4779-catalog-content\") pod \"community-operators-z2xwz\" (UID: \"c65b0616-ca8a-47a9-8cd0-2527a88c4779\") " pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.769082 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65b0616-ca8a-47a9-8cd0-2527a88c4779-utilities\") pod \"community-operators-z2xwz\" (UID: \"c65b0616-ca8a-47a9-8cd0-2527a88c4779\") " pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.769208 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhnmj\" (UniqueName: \"kubernetes.io/projected/c65b0616-ca8a-47a9-8cd0-2527a88c4779-kube-api-access-rhnmj\") pod \"community-operators-z2xwz\" (UID: \"c65b0616-ca8a-47a9-8cd0-2527a88c4779\") " pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.870881 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhnmj\" (UniqueName: \"kubernetes.io/projected/c65b0616-ca8a-47a9-8cd0-2527a88c4779-kube-api-access-rhnmj\") pod \"community-operators-z2xwz\" (UID: \"c65b0616-ca8a-47a9-8cd0-2527a88c4779\") " pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.870997 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65b0616-ca8a-47a9-8cd0-2527a88c4779-catalog-content\") pod \"community-operators-z2xwz\" (UID: \"c65b0616-ca8a-47a9-8cd0-2527a88c4779\") " pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.871048 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65b0616-ca8a-47a9-8cd0-2527a88c4779-utilities\") pod 
\"community-operators-z2xwz\" (UID: \"c65b0616-ca8a-47a9-8cd0-2527a88c4779\") " pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.871807 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65b0616-ca8a-47a9-8cd0-2527a88c4779-catalog-content\") pod \"community-operators-z2xwz\" (UID: \"c65b0616-ca8a-47a9-8cd0-2527a88c4779\") " pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.871819 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65b0616-ca8a-47a9-8cd0-2527a88c4779-utilities\") pod \"community-operators-z2xwz\" (UID: \"c65b0616-ca8a-47a9-8cd0-2527a88c4779\") " pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.890016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhnmj\" (UniqueName: \"kubernetes.io/projected/c65b0616-ca8a-47a9-8cd0-2527a88c4779-kube-api-access-rhnmj\") pod \"community-operators-z2xwz\" (UID: \"c65b0616-ca8a-47a9-8cd0-2527a88c4779\") " pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:28 crc kubenswrapper[4858]: I0218 01:53:28.969963 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:29 crc kubenswrapper[4858]: I0218 01:53:29.487675 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2xwz"] Feb 18 01:53:30 crc kubenswrapper[4858]: I0218 01:53:30.341143 4858 generic.go:334] "Generic (PLEG): container finished" podID="c65b0616-ca8a-47a9-8cd0-2527a88c4779" containerID="23f5e42fa21dd673f221f492dba2897a66e925236ec4ac445cf4655444fa16e7" exitCode=0 Feb 18 01:53:30 crc kubenswrapper[4858]: I0218 01:53:30.341357 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xwz" event={"ID":"c65b0616-ca8a-47a9-8cd0-2527a88c4779","Type":"ContainerDied","Data":"23f5e42fa21dd673f221f492dba2897a66e925236ec4ac445cf4655444fa16e7"} Feb 18 01:53:30 crc kubenswrapper[4858]: I0218 01:53:30.341454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xwz" event={"ID":"c65b0616-ca8a-47a9-8cd0-2527a88c4779","Type":"ContainerStarted","Data":"8fcd53b2a833cbb7e67f9a87b991aece2c6dd789d96add8f9060ff2a3103f555"} Feb 18 01:53:33 crc kubenswrapper[4858]: E0218 01:53:33.421583 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:53:35 crc kubenswrapper[4858]: I0218 01:53:35.414240 4858 generic.go:334] "Generic (PLEG): container finished" podID="c65b0616-ca8a-47a9-8cd0-2527a88c4779" containerID="5d085a997cd4e7bcf8414a4fbc047f6735d7b4095a9f49405def9e36b38bc47a" exitCode=0 Feb 18 01:53:35 crc kubenswrapper[4858]: I0218 01:53:35.414914 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xwz" event={"ID":"c65b0616-ca8a-47a9-8cd0-2527a88c4779","Type":"ContainerDied","Data":"5d085a997cd4e7bcf8414a4fbc047f6735d7b4095a9f49405def9e36b38bc47a"} 
Feb 18 01:53:36 crc kubenswrapper[4858]: E0218 01:53:36.429383 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:53:36 crc kubenswrapper[4858]: I0218 01:53:36.442873 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xwz" event={"ID":"c65b0616-ca8a-47a9-8cd0-2527a88c4779","Type":"ContainerStarted","Data":"0d053d054e075be4db4f261c9a3c5612b359955a64da9e1946eb144492cda232"} Feb 18 01:53:36 crc kubenswrapper[4858]: I0218 01:53:36.478966 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z2xwz" podStartSLOduration=2.9011243220000003 podStartE2EDuration="8.478938406s" podCreationTimestamp="2026-02-18 01:53:28 +0000 UTC" firstStartedPulling="2026-02-18 01:53:30.343431539 +0000 UTC m=+4763.649268271" lastFinishedPulling="2026-02-18 01:53:35.921245623 +0000 UTC m=+4769.227082355" observedRunningTime="2026-02-18 01:53:36.466845799 +0000 UTC m=+4769.772682531" watchObservedRunningTime="2026-02-18 01:53:36.478938406 +0000 UTC m=+4769.784775178" Feb 18 01:53:38 crc kubenswrapper[4858]: I0218 01:53:38.970465 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:38 crc kubenswrapper[4858]: I0218 01:53:38.971014 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:39 crc kubenswrapper[4858]: I0218 01:53:39.057319 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:47 crc kubenswrapper[4858]: E0218 01:53:47.438728 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.023180 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z2xwz" Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.087588 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2xwz"] Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.147768 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2j9qm"] Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.148004 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2j9qm" podUID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerName="registry-server" containerID="cri-o://2c8ecedac3f251631d4fd2d57aec2a9c2b49f3b7f8c46aa83a53922400e0cf20" gracePeriod=2 Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.602915 4858 generic.go:334] "Generic (PLEG): container finished" podID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerID="2c8ecedac3f251631d4fd2d57aec2a9c2b49f3b7f8c46aa83a53922400e0cf20" exitCode=0 Feb 18 01:53:49 crc 
kubenswrapper[4858]: I0218 01:53:49.603091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9qm" event={"ID":"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5","Type":"ContainerDied","Data":"2c8ecedac3f251631d4fd2d57aec2a9c2b49f3b7f8c46aa83a53922400e0cf20"} Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.603203 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j9qm" event={"ID":"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5","Type":"ContainerDied","Data":"a124170683ffed7e1cd8aa040b590f01909c8b8255ade5b616178cf650beea57"} Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.603219 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a124170683ffed7e1cd8aa040b590f01909c8b8255ade5b616178cf650beea57" Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.670753 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j9qm" Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.778061 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-utilities\") pod \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.778186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq89p\" (UniqueName: \"kubernetes.io/projected/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-kube-api-access-hq89p\") pod \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.778231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-catalog-content\") pod \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\" (UID: \"8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5\") " Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.779270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-utilities" (OuterVolumeSpecName: "utilities") pod "8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" (UID: "8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.838243 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" (UID: "8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.881110 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:49 crc kubenswrapper[4858]: I0218 01:53:49.881138 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:50 crc kubenswrapper[4858]: I0218 01:53:50.361791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-kube-api-access-hq89p" (OuterVolumeSpecName: "kube-api-access-hq89p") pod "8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" (UID: "8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5"). InnerVolumeSpecName "kube-api-access-hq89p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:53:50 crc kubenswrapper[4858]: I0218 01:53:50.392892 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq89p\" (UniqueName: \"kubernetes.io/projected/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5-kube-api-access-hq89p\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:50 crc kubenswrapper[4858]: I0218 01:53:50.611309 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j9qm" Feb 18 01:53:50 crc kubenswrapper[4858]: I0218 01:53:50.643118 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2j9qm"] Feb 18 01:53:50 crc kubenswrapper[4858]: I0218 01:53:50.651432 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2j9qm"] Feb 18 01:53:51 crc kubenswrapper[4858]: I0218 01:53:51.422676 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:53:51 crc kubenswrapper[4858]: I0218 01:53:51.448026 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" path="/var/lib/kubelet/pods/8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5/volumes" Feb 18 01:53:51 crc kubenswrapper[4858]: E0218 01:53:51.554483 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:53:51 crc kubenswrapper[4858]: E0218 01:53:51.554656 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:53:51 crc kubenswrapper[4858]: E0218 01:53:51.554858 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:53:51 crc kubenswrapper[4858]: E0218 01:53:51.556047 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:53:55 crc kubenswrapper[4858]: I0218 01:53:55.266046 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:53:55 crc kubenswrapper[4858]: I0218 01:53:55.266725 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:54:02 crc kubenswrapper[4858]: E0218 01:54:02.547212 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:54:02 crc kubenswrapper[4858]: E0218 01:54:02.547722 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:54:02 crc kubenswrapper[4858]: E0218 01:54:02.547865 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:54:02 crc kubenswrapper[4858]: E0218 01:54:02.549050 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:54:05 crc kubenswrapper[4858]: E0218 01:54:05.423691 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:54:17 crc kubenswrapper[4858]: E0218 01:54:17.433640 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:54:19 crc kubenswrapper[4858]: E0218 01:54:19.422398 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:54:25 crc kubenswrapper[4858]: I0218 01:54:25.266011 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:54:25 crc kubenswrapper[4858]: I0218 01:54:25.266656 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:54:25 crc kubenswrapper[4858]: I0218 01:54:25.266716 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:54:25 crc kubenswrapper[4858]: I0218 01:54:25.267680 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f9ea37d20553408f9f1f674761fdb4ff0044163f8659b6ad84a3117b5664a3c"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:54:25 crc kubenswrapper[4858]: I0218 01:54:25.267740 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://2f9ea37d20553408f9f1f674761fdb4ff0044163f8659b6ad84a3117b5664a3c" gracePeriod=600 Feb 18 01:54:26 crc kubenswrapper[4858]: I0218 01:54:26.100693 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="2f9ea37d20553408f9f1f674761fdb4ff0044163f8659b6ad84a3117b5664a3c" exitCode=0 Feb 18 01:54:26 crc kubenswrapper[4858]: I0218 01:54:26.100786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"2f9ea37d20553408f9f1f674761fdb4ff0044163f8659b6ad84a3117b5664a3c"} Feb 18 01:54:26 crc kubenswrapper[4858]: I0218 01:54:26.101368 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4"} Feb 18 01:54:26 crc kubenswrapper[4858]: I0218 01:54:26.101402 4858 scope.go:117] "RemoveContainer" containerID="deeea1aab83f9023546a5be39327c6fc64b522e2a68cde53e290ab5b38175a49" Feb 18 01:54:31 crc kubenswrapper[4858]: E0218 01:54:31.422009 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:54:31 crc kubenswrapper[4858]: E0218 01:54:31.422244 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:54:33 crc kubenswrapper[4858]: I0218 01:54:33.443892 4858 scope.go:117] "RemoveContainer" containerID="38880aba737d55ce78d51a1c62217da8357048608d3bf59f71fdc7c442d2fbf3" Feb 18 01:54:33 crc kubenswrapper[4858]: I0218 01:54:33.508822 4858 scope.go:117] "RemoveContainer" containerID="66289234d4405f807096c71eee79876a5db8505ffe34f303b1c72d53229e2d13" Feb 18 01:54:33 crc kubenswrapper[4858]: I0218 01:54:33.544845 4858 scope.go:117] "RemoveContainer" containerID="2c8ecedac3f251631d4fd2d57aec2a9c2b49f3b7f8c46aa83a53922400e0cf20" Feb 18 01:54:42 crc kubenswrapper[4858]: E0218 01:54:42.422008 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:54:44 crc kubenswrapper[4858]: E0218 01:54:44.421658 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:54:55 crc kubenswrapper[4858]: E0218 01:54:55.424997 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:54:56 crc kubenswrapper[4858]: E0218 01:54:56.423646 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:55:08 crc kubenswrapper[4858]: E0218 01:55:08.422531 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:55:10 crc kubenswrapper[4858]: E0218 01:55:10.422454 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:55:19 crc kubenswrapper[4858]: E0218 01:55:19.421366 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:55:22 crc kubenswrapper[4858]: E0218 01:55:22.421702 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:55:33 crc kubenswrapper[4858]: E0218 01:55:33.422018 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:55:34 crc kubenswrapper[4858]: E0218 01:55:34.422487 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:55:44 crc kubenswrapper[4858]: E0218 01:55:44.424828 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:55:45 crc kubenswrapper[4858]: E0218 01:55:45.423022 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:55:56 crc kubenswrapper[4858]: E0218 01:55:56.423117 4858 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:55:56 crc kubenswrapper[4858]: E0218 01:55:56.423412 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:56:08 crc kubenswrapper[4858]: E0218 01:56:08.421729 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:56:09 crc kubenswrapper[4858]: E0218 01:56:09.422587 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:56:22 crc kubenswrapper[4858]: E0218 01:56:22.423397 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:56:23 crc kubenswrapper[4858]: E0218 01:56:23.422748 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:56:25 crc kubenswrapper[4858]: I0218 01:56:25.265962 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:56:25 crc kubenswrapper[4858]: I0218 01:56:25.266295 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:56:34 crc kubenswrapper[4858]: E0218 01:56:34.423301 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:56:34 crc 
kubenswrapper[4858]: E0218 01:56:34.425288 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:56:45 crc kubenswrapper[4858]: I0218 01:56:45.752567 4858 generic.go:334] "Generic (PLEG): container finished" podID="0882588c-e25d-402e-ba41-76d7bec2ec65" containerID="dd75b7e37f7cd55758194fbffb33fbe96f69f88eff1cccc6b4abe49db582ba65" exitCode=2 Feb 18 01:56:45 crc kubenswrapper[4858]: I0218 01:56:45.752734 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" event={"ID":"0882588c-e25d-402e-ba41-76d7bec2ec65","Type":"ContainerDied","Data":"dd75b7e37f7cd55758194fbffb33fbe96f69f88eff1cccc6b4abe49db582ba65"} Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.437314 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:56:47 crc kubenswrapper[4858]: E0218 01:56:47.439424 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.587917 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-inventory\") pod \"0882588c-e25d-402e-ba41-76d7bec2ec65\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.587996 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dz4p\" (UniqueName: \"kubernetes.io/projected/0882588c-e25d-402e-ba41-76d7bec2ec65-kube-api-access-2dz4p\") pod \"0882588c-e25d-402e-ba41-76d7bec2ec65\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.588101 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-ssh-key-openstack-edpm-ipam\") pod \"0882588c-e25d-402e-ba41-76d7bec2ec65\" (UID: \"0882588c-e25d-402e-ba41-76d7bec2ec65\") " Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.613287 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0882588c-e25d-402e-ba41-76d7bec2ec65-kube-api-access-2dz4p" (OuterVolumeSpecName: "kube-api-access-2dz4p") pod "0882588c-e25d-402e-ba41-76d7bec2ec65" (UID: "0882588c-e25d-402e-ba41-76d7bec2ec65"). InnerVolumeSpecName "kube-api-access-2dz4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.618860 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0882588c-e25d-402e-ba41-76d7bec2ec65" (UID: "0882588c-e25d-402e-ba41-76d7bec2ec65"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.621064 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-inventory" (OuterVolumeSpecName: "inventory") pod "0882588c-e25d-402e-ba41-76d7bec2ec65" (UID: "0882588c-e25d-402e-ba41-76d7bec2ec65"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.690175 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.690209 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dz4p\" (UniqueName: \"kubernetes.io/projected/0882588c-e25d-402e-ba41-76d7bec2ec65-kube-api-access-2dz4p\") on node \"crc\" DevicePath \"\"" Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.690220 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0882588c-e25d-402e-ba41-76d7bec2ec65-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.782046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" event={"ID":"0882588c-e25d-402e-ba41-76d7bec2ec65","Type":"ContainerDied","Data":"9eddf0436322588be75f721c9248199051ecea58de01f56fba6b51d5c8e6f420"} Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.782086 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eddf0436322588be75f721c9248199051ecea58de01f56fba6b51d5c8e6f420" Feb 18 01:56:47 crc kubenswrapper[4858]: I0218 01:56:47.782135 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6" Feb 18 01:56:49 crc kubenswrapper[4858]: E0218 01:56:49.423411 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:56:55 crc kubenswrapper[4858]: I0218 01:56:55.265097 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:56:55 crc kubenswrapper[4858]: I0218 01:56:55.265760 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:57:01 crc kubenswrapper[4858]: E0218 01:57:01.422004 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:57:02 crc kubenswrapper[4858]: E0218 01:57:02.421778 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:57:14 crc kubenswrapper[4858]: E0218 01:57:14.422472 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:57:14 crc kubenswrapper[4858]: E0218 01:57:14.422520 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:57:25 crc kubenswrapper[4858]: I0218 01:57:25.265235 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:57:25 crc kubenswrapper[4858]: I0218 01:57:25.265916 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:57:25 crc kubenswrapper[4858]: I0218 01:57:25.266058 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 01:57:25 crc kubenswrapper[4858]: I0218 01:57:25.267088 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:57:25 crc kubenswrapper[4858]: I0218 01:57:25.267178 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" gracePeriod=600 Feb 18 01:57:25 crc kubenswrapper[4858]: E0218 01:57:25.394654 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:57:26 crc kubenswrapper[4858]: I0218 01:57:26.331065 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" exitCode=0 Feb 18 01:57:26 crc kubenswrapper[4858]: I0218 01:57:26.331128 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4"} Feb 18 01:57:26 crc kubenswrapper[4858]: I0218 01:57:26.331175 4858 scope.go:117] "RemoveContainer" containerID="2f9ea37d20553408f9f1f674761fdb4ff0044163f8659b6ad84a3117b5664a3c" Feb 18 01:57:26 crc kubenswrapper[4858]: I0218 01:57:26.331978 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:57:26 crc kubenswrapper[4858]: E0218 01:57:26.332313 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:57:26 crc kubenswrapper[4858]: E0218 01:57:26.420968 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:57:27 crc kubenswrapper[4858]: E0218 01:57:27.436484 4858 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:57:37 crc kubenswrapper[4858]: I0218 01:57:37.432535 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:57:37 crc kubenswrapper[4858]: E0218 01:57:37.433651 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:57:37 crc kubenswrapper[4858]: E0218 01:57:37.435431 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:57:41 crc kubenswrapper[4858]: E0218 01:57:41.423899 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:57:48 crc kubenswrapper[4858]: E0218 01:57:48.423225 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:57:49 crc kubenswrapper[4858]: I0218 01:57:49.431416 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:57:49 crc kubenswrapper[4858]: E0218 01:57:49.431959 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.193374 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ngw6p/must-gather-w2lzv"] Feb 18 01:57:54 crc kubenswrapper[4858]: E0218 01:57:54.194387 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882588c-e25d-402e-ba41-76d7bec2ec65" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.194408 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882588c-e25d-402e-ba41-76d7bec2ec65" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:57:54 crc 
kubenswrapper[4858]: E0218 01:57:54.194428 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerName="registry-server" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.194435 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerName="registry-server" Feb 18 01:57:54 crc kubenswrapper[4858]: E0218 01:57:54.194459 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerName="extract-utilities" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.194466 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerName="extract-utilities" Feb 18 01:57:54 crc kubenswrapper[4858]: E0218 01:57:54.194520 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerName="extract-content" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.194531 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerName="extract-content" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.194748 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0882588c-e25d-402e-ba41-76d7bec2ec65" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.194758 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1dcb47-f5f6-4f9a-841c-2faa7fd0acb5" containerName="registry-server" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.195998 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.198168 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ngw6p"/"openshift-service-ca.crt" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.198258 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ngw6p"/"default-dockercfg-tjcjr" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.198892 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ngw6p"/"kube-root-ca.crt" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.203442 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ngw6p/must-gather-w2lzv"] Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.280765 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2b8m\" (UniqueName: \"kubernetes.io/projected/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-kube-api-access-c2b8m\") pod \"must-gather-w2lzv\" (UID: \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\") " pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.280823 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-must-gather-output\") pod \"must-gather-w2lzv\" (UID: \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\") " pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.382950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2b8m\" (UniqueName: 
\"kubernetes.io/projected/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-kube-api-access-c2b8m\") pod \"must-gather-w2lzv\" (UID: \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\") " pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.383061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-must-gather-output\") pod \"must-gather-w2lzv\" (UID: \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\") " pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.383855 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-must-gather-output\") pod \"must-gather-w2lzv\" (UID: \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\") " pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.421270 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2b8m\" (UniqueName: \"kubernetes.io/projected/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-kube-api-access-c2b8m\") pod \"must-gather-w2lzv\" (UID: \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\") " pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.528076 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 01:57:54 crc kubenswrapper[4858]: I0218 01:57:54.999193 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ngw6p/must-gather-w2lzv"] Feb 18 01:57:55 crc kubenswrapper[4858]: W0218 01:57:55.002701 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf097f3ff_e18c_4f92_a4f1_3e6cf8e548f3.slice/crio-f4b1596c6a8e5f8de91e0ba22855f036cc0980e805c4ef9f69858839593c6409 WatchSource:0}: Error finding container f4b1596c6a8e5f8de91e0ba22855f036cc0980e805c4ef9f69858839593c6409: Status 404 returned error can't find the container with id f4b1596c6a8e5f8de91e0ba22855f036cc0980e805c4ef9f69858839593c6409 Feb 18 01:57:55 crc kubenswrapper[4858]: I0218 01:57:55.695582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" event={"ID":"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3","Type":"ContainerStarted","Data":"f4b1596c6a8e5f8de91e0ba22855f036cc0980e805c4ef9f69858839593c6409"} Feb 18 01:57:56 crc kubenswrapper[4858]: E0218 01:57:56.421683 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:57:59 crc kubenswrapper[4858]: E0218 01:57:59.421168 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:58:01 crc kubenswrapper[4858]: I0218 01:58:01.419720 4858 scope.go:117] "RemoveContainer" 
containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:58:01 crc kubenswrapper[4858]: E0218 01:58:01.420187 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:58:03 crc kubenswrapper[4858]: I0218 01:58:03.800464 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" event={"ID":"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3","Type":"ContainerStarted","Data":"aa409c3bfdd51672b3e3c976a0b811ae2f708cc711e3de84e714fc1903da5671"} Feb 18 01:58:03 crc kubenswrapper[4858]: I0218 01:58:03.804664 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" event={"ID":"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3","Type":"ContainerStarted","Data":"5d5310471c2a091ce4142f9c46e17925bfa7a15dc653ce04acd974470e5fc9c6"} Feb 18 01:58:03 crc kubenswrapper[4858]: I0218 01:58:03.833622 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" podStartSLOduration=2.005685812 podStartE2EDuration="9.833605085s" podCreationTimestamp="2026-02-18 01:57:54 +0000 UTC" firstStartedPulling="2026-02-18 01:57:55.004713186 +0000 UTC m=+5028.310549918" lastFinishedPulling="2026-02-18 01:58:02.832632459 +0000 UTC m=+5036.138469191" observedRunningTime="2026-02-18 01:58:03.827239642 +0000 UTC m=+5037.133076424" watchObservedRunningTime="2026-02-18 01:58:03.833605085 +0000 UTC m=+5037.139441817" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.654932 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b658q"] Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.658123 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.674442 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b658q"] Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.749159 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-catalog-content\") pod \"redhat-marketplace-b658q\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.749254 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-utilities\") pod \"redhat-marketplace-b658q\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.749347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4jwq\" (UniqueName: \"kubernetes.io/projected/7c75de86-14d9-4028-9842-a00da5264fe9-kube-api-access-r4jwq\") pod \"redhat-marketplace-b658q\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.851395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-catalog-content\") pod \"redhat-marketplace-b658q\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.851733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-utilities\") pod \"redhat-marketplace-b658q\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.851825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4jwq\" (UniqueName: \"kubernetes.io/projected/7c75de86-14d9-4028-9842-a00da5264fe9-kube-api-access-r4jwq\") pod \"redhat-marketplace-b658q\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.851940 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-catalog-content\") pod \"redhat-marketplace-b658q\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.852105 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-utilities\") pod \"redhat-marketplace-b658q\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.879262 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r4jwq\" (UniqueName: \"kubernetes.io/projected/7c75de86-14d9-4028-9842-a00da5264fe9-kube-api-access-r4jwq\") pod \"redhat-marketplace-b658q\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:05 crc kubenswrapper[4858]: I0218 01:58:05.975647 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:06 crc kubenswrapper[4858]: I0218 01:58:06.534463 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b658q"] Feb 18 01:58:06 crc kubenswrapper[4858]: I0218 01:58:06.855398 4858 generic.go:334] "Generic (PLEG): container finished" podID="7c75de86-14d9-4028-9842-a00da5264fe9" containerID="03eb2bbfbf299c3234357de0fa3d8b6ed297f21ed01635ae9e687b60dbe0ecdc" exitCode=0 Feb 18 01:58:06 crc kubenswrapper[4858]: I0218 01:58:06.855848 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b658q" event={"ID":"7c75de86-14d9-4028-9842-a00da5264fe9","Type":"ContainerDied","Data":"03eb2bbfbf299c3234357de0fa3d8b6ed297f21ed01635ae9e687b60dbe0ecdc"} Feb 18 01:58:06 crc kubenswrapper[4858]: I0218 01:58:06.855886 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b658q" event={"ID":"7c75de86-14d9-4028-9842-a00da5264fe9","Type":"ContainerStarted","Data":"0e7ae911132ac3d35697b47a76ad523d1034e42ffe62695ed3300ace154924cd"} Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.485200 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ngw6p/crc-debug-5m5vg"] Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.486781 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.603931 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2v5jd"] Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.606881 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.615908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jfb\" (UniqueName: \"kubernetes.io/projected/68071e5f-0198-4d78-a85e-1a66ada6cb87-kube-api-access-d4jfb\") pod \"crc-debug-5m5vg\" (UID: \"68071e5f-0198-4d78-a85e-1a66ada6cb87\") " pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.616280 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68071e5f-0198-4d78-a85e-1a66ada6cb87-host\") pod \"crc-debug-5m5vg\" (UID: \"68071e5f-0198-4d78-a85e-1a66ada6cb87\") " pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.617058 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2v5jd"] Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.718166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4jfb\" (UniqueName: \"kubernetes.io/projected/68071e5f-0198-4d78-a85e-1a66ada6cb87-kube-api-access-d4jfb\") pod \"crc-debug-5m5vg\" (UID: \"68071e5f-0198-4d78-a85e-1a66ada6cb87\") " pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.718231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkx8z\" (UniqueName: \"kubernetes.io/projected/aabcb90f-373b-4710-9b8f-65db94fc4add-kube-api-access-vkx8z\") pod \"redhat-operators-2v5jd\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.718255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-catalog-content\") pod \"redhat-operators-2v5jd\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.718325 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-utilities\") pod \"redhat-operators-2v5jd\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.718437 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68071e5f-0198-4d78-a85e-1a66ada6cb87-host\") pod \"crc-debug-5m5vg\" (UID: \"68071e5f-0198-4d78-a85e-1a66ada6cb87\") " pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.718601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68071e5f-0198-4d78-a85e-1a66ada6cb87-host\") pod \"crc-debug-5m5vg\" (UID: \"68071e5f-0198-4d78-a85e-1a66ada6cb87\") " pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.742513 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jfb\" (UniqueName: 
\"kubernetes.io/projected/68071e5f-0198-4d78-a85e-1a66ada6cb87-kube-api-access-d4jfb\") pod \"crc-debug-5m5vg\" (UID: \"68071e5f-0198-4d78-a85e-1a66ada6cb87\") " pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.811894 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.820649 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkx8z\" (UniqueName: \"kubernetes.io/projected/aabcb90f-373b-4710-9b8f-65db94fc4add-kube-api-access-vkx8z\") pod \"redhat-operators-2v5jd\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.820690 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-catalog-content\") pod \"redhat-operators-2v5jd\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.820756 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-utilities\") pod \"redhat-operators-2v5jd\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.821130 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-catalog-content\") pod \"redhat-operators-2v5jd\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.821169 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-utilities\") pod \"redhat-operators-2v5jd\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.844193 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkx8z\" (UniqueName: \"kubernetes.io/projected/aabcb90f-373b-4710-9b8f-65db94fc4add-kube-api-access-vkx8z\") pod \"redhat-operators-2v5jd\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.881162 4858 generic.go:334] "Generic (PLEG): container finished" podID="7c75de86-14d9-4028-9842-a00da5264fe9" containerID="e7ed6feb2129ef25cecf5fbcb160a2768062be2a786b11fd5f0456302c5f0075" exitCode=0 Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.881212 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b658q" event={"ID":"7c75de86-14d9-4028-9842-a00da5264fe9","Type":"ContainerDied","Data":"e7ed6feb2129ef25cecf5fbcb160a2768062be2a786b11fd5f0456302c5f0075"} Feb 18 01:58:08 crc kubenswrapper[4858]: I0218 01:58:08.990466 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:09 crc kubenswrapper[4858]: W0218 01:58:09.486013 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaabcb90f_373b_4710_9b8f_65db94fc4add.slice/crio-acd4cf36f3d90d28be3a6dd5c67c6baeba515b6458477c0de003452b6006a8ff WatchSource:0}: Error finding container acd4cf36f3d90d28be3a6dd5c67c6baeba515b6458477c0de003452b6006a8ff: Status 404 returned error can't find the container with id acd4cf36f3d90d28be3a6dd5c67c6baeba515b6458477c0de003452b6006a8ff Feb 18 01:58:09 crc kubenswrapper[4858]: I0218 01:58:09.488191 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2v5jd"] Feb 18 01:58:09 crc kubenswrapper[4858]: I0218 01:58:09.898151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b658q" event={"ID":"7c75de86-14d9-4028-9842-a00da5264fe9","Type":"ContainerStarted","Data":"0dd42e39ab1169bb08915367522b3ba52cc174b4dce3ee3d258a16938951fe6d"} Feb 18 01:58:09 crc kubenswrapper[4858]: I0218 01:58:09.903420 4858 generic.go:334] "Generic (PLEG): container finished" podID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerID="4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57" exitCode=0 Feb 18 01:58:09 crc kubenswrapper[4858]: I0218 01:58:09.903470 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v5jd" event={"ID":"aabcb90f-373b-4710-9b8f-65db94fc4add","Type":"ContainerDied","Data":"4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57"} Feb 18 01:58:09 crc kubenswrapper[4858]: I0218 01:58:09.903491 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v5jd" event={"ID":"aabcb90f-373b-4710-9b8f-65db94fc4add","Type":"ContainerStarted","Data":"acd4cf36f3d90d28be3a6dd5c67c6baeba515b6458477c0de003452b6006a8ff"} Feb 18 01:58:09 crc kubenswrapper[4858]: I0218 01:58:09.916328 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" event={"ID":"68071e5f-0198-4d78-a85e-1a66ada6cb87","Type":"ContainerStarted","Data":"858b089cbdff12187128d9e09762d14462b4afb7c13280521e2852b416ce784f"} Feb 18 01:58:09 crc kubenswrapper[4858]: I0218 01:58:09.943296 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b658q" podStartSLOduration=2.486843768 podStartE2EDuration="4.94327824s" podCreationTimestamp="2026-02-18 01:58:05 +0000 UTC" firstStartedPulling="2026-02-18 01:58:06.857754931 +0000 UTC m=+5040.163591703" lastFinishedPulling="2026-02-18 01:58:09.314189443 +0000 UTC m=+5042.620026175" observedRunningTime="2026-02-18 01:58:09.937981792 +0000 UTC m=+5043.243818534" watchObservedRunningTime="2026-02-18 01:58:09.94327824 +0000 UTC m=+5043.249114972" Feb 18 01:58:10 crc kubenswrapper[4858]: E0218 01:58:10.421787 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:58:10 crc kubenswrapper[4858]: I0218 01:58:10.925737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v5jd" 
event={"ID":"aabcb90f-373b-4710-9b8f-65db94fc4add","Type":"ContainerStarted","Data":"366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1"} Feb 18 01:58:11 crc kubenswrapper[4858]: E0218 01:58:11.422247 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:58:15 crc kubenswrapper[4858]: I0218 01:58:15.976365 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:15 crc kubenswrapper[4858]: I0218 01:58:15.976872 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:16 crc kubenswrapper[4858]: I0218 01:58:16.004535 4858 generic.go:334] "Generic (PLEG): container finished" podID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerID="366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1" exitCode=0 Feb 18 01:58:16 crc kubenswrapper[4858]: I0218 01:58:16.004593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v5jd" event={"ID":"aabcb90f-373b-4710-9b8f-65db94fc4add","Type":"ContainerDied","Data":"366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1"} Feb 18 01:58:16 crc kubenswrapper[4858]: I0218 01:58:16.420858 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:58:16 crc kubenswrapper[4858]: E0218 01:58:16.421420 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:58:17 crc kubenswrapper[4858]: I0218 01:58:17.025963 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-b658q" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" containerName="registry-server" probeResult="failure" output=< Feb 18 01:58:17 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 01:58:17 crc kubenswrapper[4858]: > Feb 18 01:58:22 crc kubenswrapper[4858]: I0218 01:58:22.143044 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" event={"ID":"68071e5f-0198-4d78-a85e-1a66ada6cb87","Type":"ContainerStarted","Data":"571ed92584ef9f58a38874759f7856c384db6863e2d627a81f97b4e18da15fe1"} Feb 18 01:58:22 crc kubenswrapper[4858]: I0218 01:58:22.161596 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" podStartSLOduration=1.684650338 podStartE2EDuration="14.161544205s" podCreationTimestamp="2026-02-18 01:58:08 +0000 UTC" firstStartedPulling="2026-02-18 01:58:08.888884823 +0000 UTC m=+5042.194721555" lastFinishedPulling="2026-02-18 01:58:21.36577869 +0000 UTC m=+5054.671615422" observedRunningTime="2026-02-18 01:58:22.159063615 +0000 UTC m=+5055.464900337" watchObservedRunningTime="2026-02-18 01:58:22.161544205 +0000 UTC m=+5055.467380937" 
Feb 18 01:58:22 crc kubenswrapper[4858]: E0218 01:58:22.420893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:58:23 crc kubenswrapper[4858]: I0218 01:58:23.154442 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v5jd" event={"ID":"aabcb90f-373b-4710-9b8f-65db94fc4add","Type":"ContainerStarted","Data":"e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996"} Feb 18 01:58:23 crc kubenswrapper[4858]: I0218 01:58:23.182481 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2v5jd" podStartSLOduration=3.30596076 podStartE2EDuration="15.182462679s" podCreationTimestamp="2026-02-18 01:58:08 +0000 UTC" firstStartedPulling="2026-02-18 01:58:09.908395523 +0000 UTC m=+5043.214232255" lastFinishedPulling="2026-02-18 01:58:21.784897442 +0000 UTC m=+5055.090734174" observedRunningTime="2026-02-18 01:58:23.179956018 +0000 UTC m=+5056.485792750" watchObservedRunningTime="2026-02-18 01:58:23.182462679 +0000 UTC m=+5056.488299411" Feb 18 01:58:24 crc kubenswrapper[4858]: E0218 01:58:24.421240 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:58:26 crc kubenswrapper[4858]: I0218 01:58:26.041570 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:26 crc kubenswrapper[4858]: I0218 01:58:26.089013 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:26 crc kubenswrapper[4858]: I0218 01:58:26.285619 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b658q"] Feb 18 01:58:27 crc kubenswrapper[4858]: I0218 01:58:27.193003 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b658q" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" containerName="registry-server" containerID="cri-o://0dd42e39ab1169bb08915367522b3ba52cc174b4dce3ee3d258a16938951fe6d" gracePeriod=2 Feb 18 01:58:28 crc kubenswrapper[4858]: I0218 01:58:28.206053 4858 generic.go:334] "Generic (PLEG): container finished" podID="7c75de86-14d9-4028-9842-a00da5264fe9" containerID="0dd42e39ab1169bb08915367522b3ba52cc174b4dce3ee3d258a16938951fe6d" exitCode=0 Feb 18 01:58:28 crc kubenswrapper[4858]: I0218 01:58:28.206389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b658q" event={"ID":"7c75de86-14d9-4028-9842-a00da5264fe9","Type":"ContainerDied","Data":"0dd42e39ab1169bb08915367522b3ba52cc174b4dce3ee3d258a16938951fe6d"} Feb 18 01:58:28 crc kubenswrapper[4858]: I0218 01:58:28.992796 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:28 crc kubenswrapper[4858]: I0218 01:58:28.993146 4858 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:29 crc kubenswrapper[4858]: I0218 01:58:29.419848 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:58:29 crc kubenswrapper[4858]: E0218 01:58:29.420147 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.050016 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2v5jd" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="registry-server" probeResult="failure" output=< Feb 18 01:58:30 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 01:58:30 crc kubenswrapper[4858]: > Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.130017 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.223849 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b658q" event={"ID":"7c75de86-14d9-4028-9842-a00da5264fe9","Type":"ContainerDied","Data":"0e7ae911132ac3d35697b47a76ad523d1034e42ffe62695ed3300ace154924cd"} Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.223896 4858 scope.go:117] "RemoveContainer" containerID="0dd42e39ab1169bb08915367522b3ba52cc174b4dce3ee3d258a16938951fe6d" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.223951 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b658q" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.248630 4858 scope.go:117] "RemoveContainer" containerID="e7ed6feb2129ef25cecf5fbcb160a2768062be2a786b11fd5f0456302c5f0075" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.250270 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-utilities\") pod \"7c75de86-14d9-4028-9842-a00da5264fe9\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.250306 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4jwq\" (UniqueName: \"kubernetes.io/projected/7c75de86-14d9-4028-9842-a00da5264fe9-kube-api-access-r4jwq\") pod \"7c75de86-14d9-4028-9842-a00da5264fe9\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.250525 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-catalog-content\") pod \"7c75de86-14d9-4028-9842-a00da5264fe9\" (UID: \"7c75de86-14d9-4028-9842-a00da5264fe9\") " Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.251071 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-utilities" (OuterVolumeSpecName: "utilities") pod "7c75de86-14d9-4028-9842-a00da5264fe9" (UID: "7c75de86-14d9-4028-9842-a00da5264fe9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.265116 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c75de86-14d9-4028-9842-a00da5264fe9-kube-api-access-r4jwq" (OuterVolumeSpecName: "kube-api-access-r4jwq") pod "7c75de86-14d9-4028-9842-a00da5264fe9" (UID: "7c75de86-14d9-4028-9842-a00da5264fe9"). InnerVolumeSpecName "kube-api-access-r4jwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.266023 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c75de86-14d9-4028-9842-a00da5264fe9" (UID: "7c75de86-14d9-4028-9842-a00da5264fe9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.278083 4858 scope.go:117] "RemoveContainer" containerID="03eb2bbfbf299c3234357de0fa3d8b6ed297f21ed01635ae9e687b60dbe0ecdc" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.359562 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.359594 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c75de86-14d9-4028-9842-a00da5264fe9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.359606 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4jwq\" (UniqueName: \"kubernetes.io/projected/7c75de86-14d9-4028-9842-a00da5264fe9-kube-api-access-r4jwq\") on node \"crc\" DevicePath \"\"" Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.557819 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b658q"] Feb 18 01:58:30 crc kubenswrapper[4858]: I0218 01:58:30.568786 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b658q"] Feb 18 01:58:31 crc kubenswrapper[4858]: I0218 01:58:31.430552 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" path="/var/lib/kubelet/pods/7c75de86-14d9-4028-9842-a00da5264fe9/volumes" Feb 18 01:58:34 crc kubenswrapper[4858]: E0218 01:58:34.421831 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:58:35 crc kubenswrapper[4858]: E0218 01:58:35.421382 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:58:40 crc kubenswrapper[4858]: I0218 01:58:40.059132 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2v5jd" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="registry-server" probeResult="failure" output=< Feb 18 01:58:40 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 01:58:40 crc kubenswrapper[4858]: > Feb 18 01:58:40 crc kubenswrapper[4858]: I0218 01:58:40.420462 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:58:40 crc kubenswrapper[4858]: E0218 01:58:40.421127 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 
01:58:46 crc kubenswrapper[4858]: E0218 01:58:46.421136 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:58:47 crc kubenswrapper[4858]: E0218 01:58:47.428596 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:58:50 crc kubenswrapper[4858]: I0218 01:58:50.046043 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2v5jd" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="registry-server" probeResult="failure" output=< Feb 18 01:58:50 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 18 01:58:50 crc kubenswrapper[4858]: > Feb 18 01:58:51 crc kubenswrapper[4858]: I0218 01:58:51.403861 4858 generic.go:334] "Generic (PLEG): container finished" podID="68071e5f-0198-4d78-a85e-1a66ada6cb87" containerID="571ed92584ef9f58a38874759f7856c384db6863e2d627a81f97b4e18da15fe1" exitCode=0 Feb 18 01:58:51 crc kubenswrapper[4858]: I0218 01:58:51.403936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" event={"ID":"68071e5f-0198-4d78-a85e-1a66ada6cb87","Type":"ContainerDied","Data":"571ed92584ef9f58a38874759f7856c384db6863e2d627a81f97b4e18da15fe1"} Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.420360 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:58:52 crc kubenswrapper[4858]: E0218 01:58:52.420803 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.540162 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.572551 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ngw6p/crc-debug-5m5vg"] Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.582082 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ngw6p/crc-debug-5m5vg"] Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.645439 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4jfb\" (UniqueName: \"kubernetes.io/projected/68071e5f-0198-4d78-a85e-1a66ada6cb87-kube-api-access-d4jfb\") pod \"68071e5f-0198-4d78-a85e-1a66ada6cb87\" (UID: \"68071e5f-0198-4d78-a85e-1a66ada6cb87\") " Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.645781 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68071e5f-0198-4d78-a85e-1a66ada6cb87-host\") pod \"68071e5f-0198-4d78-a85e-1a66ada6cb87\" (UID: \"68071e5f-0198-4d78-a85e-1a66ada6cb87\") " Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.646249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68071e5f-0198-4d78-a85e-1a66ada6cb87-host" (OuterVolumeSpecName: "host") pod "68071e5f-0198-4d78-a85e-1a66ada6cb87" (UID: "68071e5f-0198-4d78-a85e-1a66ada6cb87"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.652772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68071e5f-0198-4d78-a85e-1a66ada6cb87-kube-api-access-d4jfb" (OuterVolumeSpecName: "kube-api-access-d4jfb") pod "68071e5f-0198-4d78-a85e-1a66ada6cb87" (UID: "68071e5f-0198-4d78-a85e-1a66ada6cb87"). InnerVolumeSpecName "kube-api-access-d4jfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.747946 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68071e5f-0198-4d78-a85e-1a66ada6cb87-host\") on node \"crc\" DevicePath \"\"" Feb 18 01:58:52 crc kubenswrapper[4858]: I0218 01:58:52.747981 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4jfb\" (UniqueName: \"kubernetes.io/projected/68071e5f-0198-4d78-a85e-1a66ada6cb87-kube-api-access-d4jfb\") on node \"crc\" DevicePath \"\"" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.427830 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-5m5vg" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.434171 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68071e5f-0198-4d78-a85e-1a66ada6cb87" path="/var/lib/kubelet/pods/68071e5f-0198-4d78-a85e-1a66ada6cb87/volumes" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.435129 4858 scope.go:117] "RemoveContainer" containerID="571ed92584ef9f58a38874759f7856c384db6863e2d627a81f97b4e18da15fe1" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.766097 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ngw6p/crc-debug-pzqhg"] Feb 18 01:58:53 crc kubenswrapper[4858]: E0218 01:58:53.766595 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68071e5f-0198-4d78-a85e-1a66ada6cb87" containerName="container-00" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.766617 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="68071e5f-0198-4d78-a85e-1a66ada6cb87" containerName="container-00" Feb 18 01:58:53 crc kubenswrapper[4858]: E0218 01:58:53.766644 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" containerName="extract-content" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.766652 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" containerName="extract-content" Feb 18 01:58:53 crc kubenswrapper[4858]: E0218 01:58:53.766686 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" containerName="registry-server" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.766694 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" containerName="registry-server" Feb 18 01:58:53 crc kubenswrapper[4858]: E0218 01:58:53.766715 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" containerName="extract-utilities" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.766723 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" containerName="extract-utilities" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.766949 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="68071e5f-0198-4d78-a85e-1a66ada6cb87" containerName="container-00" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.766989 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c75de86-14d9-4028-9842-a00da5264fe9" containerName="registry-server" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.767930 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.870661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nm9s\" (UniqueName: \"kubernetes.io/projected/e49f9f59-024f-437c-bc67-73e9f06b506f-kube-api-access-4nm9s\") pod \"crc-debug-pzqhg\" (UID: \"e49f9f59-024f-437c-bc67-73e9f06b506f\") " pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.870748 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e49f9f59-024f-437c-bc67-73e9f06b506f-host\") pod \"crc-debug-pzqhg\" (UID: \"e49f9f59-024f-437c-bc67-73e9f06b506f\") " pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.973135 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nm9s\" (UniqueName: \"kubernetes.io/projected/e49f9f59-024f-437c-bc67-73e9f06b506f-kube-api-access-4nm9s\") pod \"crc-debug-pzqhg\" (UID: \"e49f9f59-024f-437c-bc67-73e9f06b506f\") " pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.973213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e49f9f59-024f-437c-bc67-73e9f06b506f-host\") pod \"crc-debug-pzqhg\" (UID: \"e49f9f59-024f-437c-bc67-73e9f06b506f\") " pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.973329 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e49f9f59-024f-437c-bc67-73e9f06b506f-host\") pod \"crc-debug-pzqhg\" (UID: \"e49f9f59-024f-437c-bc67-73e9f06b506f\") " pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:53 crc kubenswrapper[4858]: I0218 01:58:53.991610 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nm9s\" (UniqueName: \"kubernetes.io/projected/e49f9f59-024f-437c-bc67-73e9f06b506f-kube-api-access-4nm9s\") pod \"crc-debug-pzqhg\" (UID: \"e49f9f59-024f-437c-bc67-73e9f06b506f\") " pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:54 crc kubenswrapper[4858]: I0218 01:58:54.091278 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:54 crc kubenswrapper[4858]: I0218 01:58:54.437211 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" event={"ID":"e49f9f59-024f-437c-bc67-73e9f06b506f","Type":"ContainerStarted","Data":"ab6348b913a80ef858940f90e7ac645ced9fd1384fcd6ae44b9185d966cd723c"} Feb 18 01:58:55 crc kubenswrapper[4858]: I0218 01:58:55.447666 4858 generic.go:334] "Generic (PLEG): container finished" podID="e49f9f59-024f-437c-bc67-73e9f06b506f" containerID="dd46bbf02c0496f5359cf6cf96797bef40f68627002491f08e6763c5400ace08" exitCode=0 Feb 18 01:58:55 crc kubenswrapper[4858]: I0218 01:58:55.447829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" event={"ID":"e49f9f59-024f-437c-bc67-73e9f06b506f","Type":"ContainerDied","Data":"dd46bbf02c0496f5359cf6cf96797bef40f68627002491f08e6763c5400ace08"} Feb 18 01:58:56 crc kubenswrapper[4858]: I0218 01:58:56.084801 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ngw6p/crc-debug-pzqhg"] Feb 18 01:58:56 crc kubenswrapper[4858]: I0218 01:58:56.100930 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ngw6p/crc-debug-pzqhg"] Feb 18 01:58:56 crc kubenswrapper[4858]: I0218 01:58:56.566576 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:56 crc kubenswrapper[4858]: I0218 01:58:56.631059 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nm9s\" (UniqueName: \"kubernetes.io/projected/e49f9f59-024f-437c-bc67-73e9f06b506f-kube-api-access-4nm9s\") pod \"e49f9f59-024f-437c-bc67-73e9f06b506f\" (UID: \"e49f9f59-024f-437c-bc67-73e9f06b506f\") " Feb 18 01:58:56 crc kubenswrapper[4858]: I0218 01:58:56.631161 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e49f9f59-024f-437c-bc67-73e9f06b506f-host\") pod \"e49f9f59-024f-437c-bc67-73e9f06b506f\" (UID: \"e49f9f59-024f-437c-bc67-73e9f06b506f\") " Feb 18 01:58:56 crc kubenswrapper[4858]: I0218 01:58:56.631272 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49f9f59-024f-437c-bc67-73e9f06b506f-host" (OuterVolumeSpecName: "host") pod "e49f9f59-024f-437c-bc67-73e9f06b506f" (UID: "e49f9f59-024f-437c-bc67-73e9f06b506f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 01:58:56 crc kubenswrapper[4858]: I0218 01:58:56.631972 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e49f9f59-024f-437c-bc67-73e9f06b506f-host\") on node \"crc\" DevicePath \"\"" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.164976 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49f9f59-024f-437c-bc67-73e9f06b506f-kube-api-access-4nm9s" (OuterVolumeSpecName: "kube-api-access-4nm9s") pod "e49f9f59-024f-437c-bc67-73e9f06b506f" (UID: "e49f9f59-024f-437c-bc67-73e9f06b506f"). InnerVolumeSpecName "kube-api-access-4nm9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.245373 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nm9s\" (UniqueName: \"kubernetes.io/projected/e49f9f59-024f-437c-bc67-73e9f06b506f-kube-api-access-4nm9s\") on node \"crc\" DevicePath \"\"" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.394174 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ngw6p/crc-debug-sv475"] Feb 18 01:58:57 crc kubenswrapper[4858]: E0218 01:58:57.394678 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49f9f59-024f-437c-bc67-73e9f06b506f" containerName="container-00" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.394697 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49f9f59-024f-437c-bc67-73e9f06b506f" containerName="container-00" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.394913 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e49f9f59-024f-437c-bc67-73e9f06b506f" containerName="container-00" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.395818 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.438025 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49f9f59-024f-437c-bc67-73e9f06b506f" path="/var/lib/kubelet/pods/e49f9f59-024f-437c-bc67-73e9f06b506f/volumes" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.456315 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69ld8\" (UniqueName: \"kubernetes.io/projected/869308a1-5912-4acc-9a7f-e04eab8b9ac3-kube-api-access-69ld8\") pod \"crc-debug-sv475\" (UID: \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\") " pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.456401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/869308a1-5912-4acc-9a7f-e04eab8b9ac3-host\") pod \"crc-debug-sv475\" (UID: \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\") " pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.468932 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-pzqhg" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.468944 4858 scope.go:117] "RemoveContainer" containerID="dd46bbf02c0496f5359cf6cf96797bef40f68627002491f08e6763c5400ace08" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.557983 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69ld8\" (UniqueName: \"kubernetes.io/projected/869308a1-5912-4acc-9a7f-e04eab8b9ac3-kube-api-access-69ld8\") pod \"crc-debug-sv475\" (UID: \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\") " pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.558066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/869308a1-5912-4acc-9a7f-e04eab8b9ac3-host\") pod \"crc-debug-sv475\" (UID: \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\") " pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.559282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/869308a1-5912-4acc-9a7f-e04eab8b9ac3-host\") pod \"crc-debug-sv475\" (UID: \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\") " pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.575270 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69ld8\" (UniqueName: \"kubernetes.io/projected/869308a1-5912-4acc-9a7f-e04eab8b9ac3-kube-api-access-69ld8\") pod \"crc-debug-sv475\" (UID: \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\") " pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:58:57 crc kubenswrapper[4858]: I0218 01:58:57.716535 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:58:58 crc kubenswrapper[4858]: I0218 01:58:58.479457 4858 generic.go:334] "Generic (PLEG): container finished" podID="869308a1-5912-4acc-9a7f-e04eab8b9ac3" containerID="b34c5ad019e2b3a6088c10442f5ed36d72b81f36110c48f7c3eee5e3100f28d9" exitCode=0 Feb 18 01:58:58 crc kubenswrapper[4858]: I0218 01:58:58.479787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/crc-debug-sv475" event={"ID":"869308a1-5912-4acc-9a7f-e04eab8b9ac3","Type":"ContainerDied","Data":"b34c5ad019e2b3a6088c10442f5ed36d72b81f36110c48f7c3eee5e3100f28d9"} Feb 18 01:58:58 crc kubenswrapper[4858]: I0218 01:58:58.479812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/crc-debug-sv475" event={"ID":"869308a1-5912-4acc-9a7f-e04eab8b9ac3","Type":"ContainerStarted","Data":"86a91226d0777e07eb64cb105055f6ee4f93c8e29fb828ca77d2cd1384a52c8c"} Feb 18 01:58:58 crc kubenswrapper[4858]: I0218 01:58:58.517876 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ngw6p/crc-debug-sv475"] Feb 18 01:58:58 crc kubenswrapper[4858]: I0218 01:58:58.533428 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ngw6p/crc-debug-sv475"] Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.057188 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.111469 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.310251 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2v5jd"] Feb 18 01:58:59 crc kubenswrapper[4858]: E0218 01:58:59.422618 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.422847 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:58:59 crc kubenswrapper[4858]: E0218 01:58:59.514242 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:58:59 crc kubenswrapper[4858]: E0218 01:58:59.514311 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 01:58:59 crc kubenswrapper[4858]: E0218 01:58:59.514487 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:58:59 crc kubenswrapper[4858]: E0218 01:58:59.515812 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.626394 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.708943 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69ld8\" (UniqueName: \"kubernetes.io/projected/869308a1-5912-4acc-9a7f-e04eab8b9ac3-kube-api-access-69ld8\") pod \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\" (UID: \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\") " Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.709117 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/869308a1-5912-4acc-9a7f-e04eab8b9ac3-host\") pod \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\" (UID: \"869308a1-5912-4acc-9a7f-e04eab8b9ac3\") " Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.709449 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/869308a1-5912-4acc-9a7f-e04eab8b9ac3-host" (OuterVolumeSpecName: "host") pod "869308a1-5912-4acc-9a7f-e04eab8b9ac3" (UID: "869308a1-5912-4acc-9a7f-e04eab8b9ac3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.709892 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/869308a1-5912-4acc-9a7f-e04eab8b9ac3-host\") on node \"crc\" DevicePath \"\"" Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.716699 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869308a1-5912-4acc-9a7f-e04eab8b9ac3-kube-api-access-69ld8" (OuterVolumeSpecName: "kube-api-access-69ld8") pod "869308a1-5912-4acc-9a7f-e04eab8b9ac3" (UID: "869308a1-5912-4acc-9a7f-e04eab8b9ac3"). InnerVolumeSpecName "kube-api-access-69ld8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:58:59 crc kubenswrapper[4858]: I0218 01:58:59.811911 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69ld8\" (UniqueName: \"kubernetes.io/projected/869308a1-5912-4acc-9a7f-e04eab8b9ac3-kube-api-access-69ld8\") on node \"crc\" DevicePath \"\"" Feb 18 01:59:00 crc kubenswrapper[4858]: I0218 01:59:00.502413 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2v5jd" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="registry-server" containerID="cri-o://e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996" gracePeriod=2 Feb 18 01:59:00 crc kubenswrapper[4858]: I0218 01:59:00.502789 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngw6p/crc-debug-sv475" Feb 18 01:59:00 crc kubenswrapper[4858]: I0218 01:59:00.503700 4858 scope.go:117] "RemoveContainer" containerID="b34c5ad019e2b3a6088c10442f5ed36d72b81f36110c48f7c3eee5e3100f28d9" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.076936 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.157949 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-catalog-content\") pod \"aabcb90f-373b-4710-9b8f-65db94fc4add\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.158013 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-utilities\") pod \"aabcb90f-373b-4710-9b8f-65db94fc4add\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.158072 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkx8z\" (UniqueName: \"kubernetes.io/projected/aabcb90f-373b-4710-9b8f-65db94fc4add-kube-api-access-vkx8z\") pod \"aabcb90f-373b-4710-9b8f-65db94fc4add\" (UID: \"aabcb90f-373b-4710-9b8f-65db94fc4add\") " Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.158839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-utilities" (OuterVolumeSpecName: "utilities") pod "aabcb90f-373b-4710-9b8f-65db94fc4add" (UID: "aabcb90f-373b-4710-9b8f-65db94fc4add"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.159107 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.163788 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabcb90f-373b-4710-9b8f-65db94fc4add-kube-api-access-vkx8z" (OuterVolumeSpecName: "kube-api-access-vkx8z") pod "aabcb90f-373b-4710-9b8f-65db94fc4add" (UID: "aabcb90f-373b-4710-9b8f-65db94fc4add"). InnerVolumeSpecName "kube-api-access-vkx8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.261558 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkx8z\" (UniqueName: \"kubernetes.io/projected/aabcb90f-373b-4710-9b8f-65db94fc4add-kube-api-access-vkx8z\") on node \"crc\" DevicePath \"\"" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.268223 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aabcb90f-373b-4710-9b8f-65db94fc4add" (UID: "aabcb90f-373b-4710-9b8f-65db94fc4add"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.363317 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aabcb90f-373b-4710-9b8f-65db94fc4add-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.431944 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869308a1-5912-4acc-9a7f-e04eab8b9ac3" path="/var/lib/kubelet/pods/869308a1-5912-4acc-9a7f-e04eab8b9ac3/volumes" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.515348 4858 generic.go:334] "Generic (PLEG): container finished" podID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerID="e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996" exitCode=0 Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.515409 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v5jd" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.515416 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v5jd" event={"ID":"aabcb90f-373b-4710-9b8f-65db94fc4add","Type":"ContainerDied","Data":"e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996"} Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.515443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v5jd" event={"ID":"aabcb90f-373b-4710-9b8f-65db94fc4add","Type":"ContainerDied","Data":"acd4cf36f3d90d28be3a6dd5c67c6baeba515b6458477c0de003452b6006a8ff"} Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.515459 4858 scope.go:117] "RemoveContainer" containerID="e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.545723 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2v5jd"] Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.550994 4858 scope.go:117] "RemoveContainer" containerID="366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.564981 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2v5jd"] Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.573345 4858 scope.go:117] "RemoveContainer" containerID="4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.624700 4858 scope.go:117] "RemoveContainer" containerID="e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996" Feb 18 01:59:01 crc kubenswrapper[4858]: E0218 01:59:01.625185 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996\": container with ID starting with e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996 not found: ID does not exist" containerID="e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.625303 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996"} err="failed to get container status \"e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996\": rpc error: code = NotFound desc 
= could not find container \"e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996\": container with ID starting with e5e7d22dba68de165ca2983d971b3a6a390cf5ef58925788b5da21d4b19c1996 not found: ID does not exist" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.625399 4858 scope.go:117] "RemoveContainer" containerID="366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1" Feb 18 01:59:01 crc kubenswrapper[4858]: E0218 01:59:01.626039 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1\": container with ID starting with 366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1 not found: ID does not exist" containerID="366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.626095 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1"} err="failed to get container status \"366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1\": rpc error: code = NotFound desc = could not find container \"366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1\": container with ID starting with 366c37cc8d3dab2f02156402722c877cb9aba5585ea484ea7c618aa23a6a60f1 not found: ID does not exist" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.626133 4858 scope.go:117] "RemoveContainer" containerID="4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57" Feb 18 01:59:01 crc kubenswrapper[4858]: E0218 01:59:01.626563 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57\": container with ID starting with 4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57 not found: ID does not exist" containerID="4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57" Feb 18 01:59:01 crc kubenswrapper[4858]: I0218 01:59:01.626594 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57"} err="failed to get container status \"4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57\": rpc error: code = NotFound desc = could not find container \"4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57\": container with ID starting with 4093d8b33c991d4dc39633fc70e2acb3145d49f92a310be0a59510438a60ee57 not found: ID does not exist" Feb 18 01:59:03 crc kubenswrapper[4858]: I0218 01:59:03.419422 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:59:03 crc kubenswrapper[4858]: E0218 01:59:03.420137 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:59:03 crc kubenswrapper[4858]: I0218 01:59:03.435411 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" 
path="/var/lib/kubelet/pods/aabcb90f-373b-4710-9b8f-65db94fc4add/volumes" Feb 18 01:59:12 crc kubenswrapper[4858]: E0218 01:59:12.423211 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:59:13 crc kubenswrapper[4858]: E0218 01:59:13.570929 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:59:13 crc kubenswrapper[4858]: E0218 01:59:13.570988 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:59:13 crc kubenswrapper[4858]: E0218 01:59:13.571135 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:59:13 crc kubenswrapper[4858]: E0218 01:59:13.572387 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:59:17 crc kubenswrapper[4858]: I0218 01:59:17.428293 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:59:17 crc kubenswrapper[4858]: E0218 01:59:17.429168 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:59:23 crc kubenswrapper[4858]: E0218 01:59:23.421943 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:59:26 crc kubenswrapper[4858]: E0218 01:59:26.422750 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:59:30 crc kubenswrapper[4858]: I0218 01:59:30.418953 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:59:30 crc 
kubenswrapper[4858]: E0218 01:59:30.419615 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:59:37 crc kubenswrapper[4858]: E0218 01:59:37.427254 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:59:40 crc kubenswrapper[4858]: E0218 01:59:40.423698 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:59:44 crc kubenswrapper[4858]: I0218 01:59:44.419464 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:59:44 crc kubenswrapper[4858]: E0218 01:59:44.420132 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:59:50 crc kubenswrapper[4858]: E0218 01:59:50.421200 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 01:59:54 crc kubenswrapper[4858]: E0218 01:59:54.423916 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.081085 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_22183a64-a68c-47af-8352-b04603981c9d/init-config-reloader/0.log" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.314443 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_22183a64-a68c-47af-8352-b04603981c9d/alertmanager/0.log" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.372681 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_22183a64-a68c-47af-8352-b04603981c9d/config-reloader/0.log" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.374607 4858 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_22183a64-a68c-47af-8352-b04603981c9d/init-config-reloader/0.log" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.513127 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-674dbc688d-knngw_f9ed2521-63c1-48e5-902a-7b92102c74bb/barbican-api/0.log" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.628903 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-674dbc688d-knngw_f9ed2521-63c1-48e5-902a-7b92102c74bb/barbican-api-log/0.log" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.672961 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b45d5d658-tw8nb_cb794842-ad8f-4c9f-886b-b96df4bf5e5e/barbican-keystone-listener/0.log" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.760377 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b45d5d658-tw8nb_cb794842-ad8f-4c9f-886b-b96df4bf5e5e/barbican-keystone-listener-log/0.log" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.884606 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74dd7b5ff9-wg9dt_f21330fb-32fb-43a6-afdb-9337c060f960/barbican-worker/0.log" Feb 18 01:59:55 crc kubenswrapper[4858]: I0218 01:59:55.898830 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74dd7b5ff9-wg9dt_f21330fb-32fb-43a6-afdb-9337c060f960/barbican-worker-log/0.log" Feb 18 01:59:56 crc kubenswrapper[4858]: I0218 01:59:56.343137 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8qkzf_2b6904c5-bb8c-4534-a12c-723f228bcf32/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:59:56 crc kubenswrapper[4858]: I0218 01:59:56.522588 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1b28954c-8d35-4f43-a44b-307a56f6fff5/ceilometer-notification-agent/0.log" Feb 18 01:59:56 crc kubenswrapper[4858]: I0218 01:59:56.564855 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1b28954c-8d35-4f43-a44b-307a56f6fff5/proxy-httpd/0.log" Feb 18 01:59:56 crc kubenswrapper[4858]: I0218 01:59:56.600665 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1b28954c-8d35-4f43-a44b-307a56f6fff5/sg-core/0.log" Feb 18 01:59:56 crc kubenswrapper[4858]: I0218 01:59:56.807589 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2c83714c-1da1-4e6f-81a6-310d3bc6ec44/cinder-api/0.log" Feb 18 01:59:56 crc kubenswrapper[4858]: I0218 01:59:56.824046 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2c83714c-1da1-4e6f-81a6-310d3bc6ec44/cinder-api-log/0.log" Feb 18 01:59:56 crc kubenswrapper[4858]: I0218 01:59:56.855977 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e6282ef1-5606-4bda-aea6-da44f3b7ddca/cinder-scheduler/0.log" Feb 18 01:59:56 crc kubenswrapper[4858]: I0218 01:59:56.991192 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e6282ef1-5606-4bda-aea6-da44f3b7ddca/probe/0.log" Feb 18 01:59:57 crc kubenswrapper[4858]: I0218 01:59:57.066012 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2/cloudkitty-api-log/0.log" Feb 18 
01:59:57 crc kubenswrapper[4858]: I0218 01:59:57.124913 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_311f8faa-b6c8-4c0f-875a-cf09c1e9dbf2/cloudkitty-api/0.log" Feb 18 01:59:57 crc kubenswrapper[4858]: I0218 01:59:57.329310 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_284a610d-47d0-4f89-925c-c28aabef77e0/loki-compactor/0.log" Feb 18 01:59:57 crc kubenswrapper[4858]: I0218 01:59:57.519394 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-6mvr5_0117af9e-cf65-489b-80f0-8f8c449baf92/loki-distributor/0.log" Feb 18 01:59:57 crc kubenswrapper[4858]: I0218 01:59:57.615374 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-755l8_7c0f3c67-5f6e-4ce3-86c3-a3cbeaaeaa3c/gateway/0.log" Feb 18 01:59:57 crc kubenswrapper[4858]: I0218 01:59:57.746323 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-vtwxf_ef8bfa00-4587-4b2d-9fa9-3f58d3b4ed14/gateway/0.log" Feb 18 01:59:57 crc kubenswrapper[4858]: I0218 01:59:57.878060 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_abc34ee9-ce6b-404e-b4d0-bd6211a3bc72/loki-index-gateway/0.log" Feb 18 01:59:57 crc kubenswrapper[4858]: I0218 01:59:57.993754 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_c716bb3e-01b1-4bc7-a9a2-4604faf684f0/loki-ingester/0.log" Feb 18 01:59:58 crc kubenswrapper[4858]: I0218 01:59:58.105773 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-v9f9c_8cb4efd7-58cc-48fa-8d37-cd5d97add16c/loki-querier/0.log" Feb 18 01:59:58 crc kubenswrapper[4858]: I0218 01:59:58.237396 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-zqrb9_a78eeeda-46f2-4d10-b160-97d477d1d80e/loki-query-frontend/0.log" Feb 18 01:59:58 crc kubenswrapper[4858]: I0218 01:59:58.521767 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-8xfk8_d60d959f-1901-4dcb-b7fc-51a6523275a1/init/0.log" Feb 18 01:59:58 crc kubenswrapper[4858]: I0218 01:59:58.789349 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-8xfk8_d60d959f-1901-4dcb-b7fc-51a6523275a1/dnsmasq-dns/0.log" Feb 18 01:59:58 crc kubenswrapper[4858]: I0218 01:59:58.873648 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-8xfk8_d60d959f-1901-4dcb-b7fc-51a6523275a1/init/0.log" Feb 18 01:59:58 crc kubenswrapper[4858]: I0218 01:59:58.889167 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2qpx6_0882588c-e25d-402e-ba41-76d7bec2ec65/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:59:59 crc kubenswrapper[4858]: I0218 01:59:59.347939 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jx4sj_13898d25-206e-4010-9f2f-54546c48aee6/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:59:59 crc kubenswrapper[4858]: I0218 01:59:59.370476 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-dq2tp_84f1880d-a959-4d42-85c2-bf04e0268fda/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:59:59 crc kubenswrapper[4858]: I0218 01:59:59.424050 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 01:59:59 crc kubenswrapper[4858]: E0218 01:59:59.424461 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 01:59:59 crc kubenswrapper[4858]: I0218 01:59:59.528190 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nn2cq_b76d04a7-6eb2-4a9a-8934-ff0cea670d77/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:59:59 crc kubenswrapper[4858]: I0218 01:59:59.651591 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nz525_93cfe4a9-20e3-4c13-82bb-7c3c634214ce/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:59:59 crc kubenswrapper[4858]: I0218 01:59:59.850000 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pj7s4_65cd5b4f-e1ce-401d-b2e7-9c622282c342/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:59:59 crc kubenswrapper[4858]: I0218 01:59:59.951511 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-v55bj_bcd6a468-3c13-4a07-af88-b78f12b9de4f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.093941 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_58abf118-bedd-4b18-a089-bf4ac9d06f44/glance-httpd/0.log" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.145906 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9"] Feb 18 02:00:00 crc kubenswrapper[4858]: E0218 02:00:00.146302 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869308a1-5912-4acc-9a7f-e04eab8b9ac3" containerName="container-00" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.147288 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="869308a1-5912-4acc-9a7f-e04eab8b9ac3" containerName="container-00" Feb 18 02:00:00 crc kubenswrapper[4858]: E0218 02:00:00.147304 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="registry-server" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.147312 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="registry-server" Feb 18 02:00:00 crc kubenswrapper[4858]: E0218 02:00:00.147326 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="extract-utilities" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.147332 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="extract-utilities" Feb 18 02:00:00 crc kubenswrapper[4858]: E0218 02:00:00.147358 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="extract-content" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.147363 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="extract-content" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.147596 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="869308a1-5912-4acc-9a7f-e04eab8b9ac3" containerName="container-00" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.147610 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabcb90f-373b-4710-9b8f-65db94fc4add" containerName="registry-server" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.148370 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.153477 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.153580 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.178855 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9"] Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.196682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scl7f\" (UniqueName: \"kubernetes.io/projected/86f0e5d4-b37c-427b-a913-969a53a69c78-kube-api-access-scl7f\") pod \"collect-profiles-29523000-7cks9\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.196753 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86f0e5d4-b37c-427b-a913-969a53a69c78-secret-volume\") pod \"collect-profiles-29523000-7cks9\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.196812 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86f0e5d4-b37c-427b-a913-969a53a69c78-config-volume\") pod \"collect-profiles-29523000-7cks9\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.240161 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_58abf118-bedd-4b18-a089-bf4ac9d06f44/glance-log/0.log" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.299090 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scl7f\" (UniqueName: \"kubernetes.io/projected/86f0e5d4-b37c-427b-a913-969a53a69c78-kube-api-access-scl7f\") pod \"collect-profiles-29523000-7cks9\" 
(UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.299225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86f0e5d4-b37c-427b-a913-969a53a69c78-secret-volume\") pod \"collect-profiles-29523000-7cks9\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.300836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86f0e5d4-b37c-427b-a913-969a53a69c78-config-volume\") pod \"collect-profiles-29523000-7cks9\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.300895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86f0e5d4-b37c-427b-a913-969a53a69c78-config-volume\") pod \"collect-profiles-29523000-7cks9\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.309079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86f0e5d4-b37c-427b-a913-969a53a69c78-secret-volume\") pod \"collect-profiles-29523000-7cks9\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.330093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scl7f\" (UniqueName: \"kubernetes.io/projected/86f0e5d4-b37c-427b-a913-969a53a69c78-kube-api-access-scl7f\") pod \"collect-profiles-29523000-7cks9\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.380169 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d/glance-log/0.log" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.484727 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.679501 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d9ce7bea-d48d-4bad-8b3c-3e573ae34e6d/glance-httpd/0.log" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.870022 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5cf649f6f9-dtsbl_26a5ef88-d04d-4360-97b2-de3aab55c822/keystone-api/0.log" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.915704 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522941-8t6vf_2201a764-9f38-4708-b9ef-14515082aae5/keystone-cron/0.log" Feb 18 02:00:00 crc kubenswrapper[4858]: I0218 02:00:00.968679 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9"] Feb 18 02:00:01 crc kubenswrapper[4858]: I0218 02:00:01.026997 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d212b736-c8c8-43a3-923d-098fe3a06a6b/kube-state-metrics/0.log" Feb 18 02:00:01 crc kubenswrapper[4858]: I0218 02:00:01.145137 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" event={"ID":"86f0e5d4-b37c-427b-a913-969a53a69c78","Type":"ContainerStarted","Data":"fbbcaa70df4aba5b6d515d198e3e18aeab6e6e41a5f1b56ea8638b2246bffb75"} Feb 18 02:00:01 crc kubenswrapper[4858]: I0218 02:00:01.298067 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-79f994c65-x27nl_d1f825a6-aa98-4e73-a29c-4b829bf606d6/neutron-api/0.log" Feb 18 02:00:01 crc kubenswrapper[4858]: E0218 02:00:01.423074 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:00:01 crc kubenswrapper[4858]: I0218 02:00:01.490425 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-79f994c65-x27nl_d1f825a6-aa98-4e73-a29c-4b829bf606d6/neutron-httpd/0.log" Feb 18 02:00:02 crc kubenswrapper[4858]: I0218 02:00:02.014677 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fc1dcf66-88aa-4f05-89e7-b107f6a49ce6/nova-api-log/0.log" Feb 18 02:00:02 crc kubenswrapper[4858]: I0218 02:00:02.157122 4858 generic.go:334] "Generic (PLEG): container finished" podID="86f0e5d4-b37c-427b-a913-969a53a69c78" containerID="b0f70c3b6c8f783dca312dd5f4e5426dc065a57733787b04f14311d25370ece1" exitCode=0 Feb 18 02:00:02 crc kubenswrapper[4858]: I0218 02:00:02.157165 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" event={"ID":"86f0e5d4-b37c-427b-a913-969a53a69c78","Type":"ContainerDied","Data":"b0f70c3b6c8f783dca312dd5f4e5426dc065a57733787b04f14311d25370ece1"} Feb 18 02:00:02 crc kubenswrapper[4858]: I0218 02:00:02.352894 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f054135a-7843-4399-9b3e-8d92bb101e7c/nova-cell0-conductor-conductor/0.log" Feb 18 02:00:02 crc kubenswrapper[4858]: I0218 02:00:02.376169 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_fc1dcf66-88aa-4f05-89e7-b107f6a49ce6/nova-api-api/0.log" Feb 18 02:00:02 crc kubenswrapper[4858]: I0218 02:00:02.586551 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_46f70137-27be-4f64-9778-cfca8978b247/nova-cell1-conductor-conductor/0.log" Feb 18 02:00:02 crc kubenswrapper[4858]: I0218 02:00:02.683739 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_57fe3041-27cb-4e28-949c-7d5a37d033fc/nova-cell1-novncproxy-novncproxy/0.log" Feb 18 02:00:02 crc kubenswrapper[4858]: I0218 02:00:02.876991 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_024c4106-6664-48f0-a098-6638f4d9a9f5/nova-metadata-log/0.log" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.212013 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_16fae84a-ee9d-47b2-b83f-35aa53ac7da0/nova-scheduler-scheduler/0.log" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.505348 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_acb8b920-9bb7-42b7-8bf7-e8f6b5880654/mysql-bootstrap/0.log" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.626948 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.682038 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scl7f\" (UniqueName: \"kubernetes.io/projected/86f0e5d4-b37c-427b-a913-969a53a69c78-kube-api-access-scl7f\") pod \"86f0e5d4-b37c-427b-a913-969a53a69c78\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.682162 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86f0e5d4-b37c-427b-a913-969a53a69c78-secret-volume\") pod \"86f0e5d4-b37c-427b-a913-969a53a69c78\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.682324 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86f0e5d4-b37c-427b-a913-969a53a69c78-config-volume\") pod \"86f0e5d4-b37c-427b-a913-969a53a69c78\" (UID: \"86f0e5d4-b37c-427b-a913-969a53a69c78\") " Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.683365 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f0e5d4-b37c-427b-a913-969a53a69c78-config-volume" (OuterVolumeSpecName: "config-volume") pod "86f0e5d4-b37c-427b-a913-969a53a69c78" (UID: "86f0e5d4-b37c-427b-a913-969a53a69c78"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.686485 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_acb8b920-9bb7-42b7-8bf7-e8f6b5880654/mysql-bootstrap/0.log" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.688543 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f0e5d4-b37c-427b-a913-969a53a69c78-kube-api-access-scl7f" (OuterVolumeSpecName: "kube-api-access-scl7f") pod "86f0e5d4-b37c-427b-a913-969a53a69c78" (UID: "86f0e5d4-b37c-427b-a913-969a53a69c78"). 
InnerVolumeSpecName "kube-api-access-scl7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.689315 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f0e5d4-b37c-427b-a913-969a53a69c78-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "86f0e5d4-b37c-427b-a913-969a53a69c78" (UID: "86f0e5d4-b37c-427b-a913-969a53a69c78"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.757268 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_acb8b920-9bb7-42b7-8bf7-e8f6b5880654/galera/0.log" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.809912 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scl7f\" (UniqueName: \"kubernetes.io/projected/86f0e5d4-b37c-427b-a913-969a53a69c78-kube-api-access-scl7f\") on node \"crc\" DevicePath \"\"" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.810200 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86f0e5d4-b37c-427b-a913-969a53a69c78-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.810223 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86f0e5d4-b37c-427b-a913-969a53a69c78-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 02:00:03 crc kubenswrapper[4858]: I0218 02:00:03.975162 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a845f908-18e9-47e2-bc4f-01308c8a69b3/mysql-bootstrap/0.log" Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.173887 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" event={"ID":"86f0e5d4-b37c-427b-a913-969a53a69c78","Type":"ContainerDied","Data":"fbbcaa70df4aba5b6d515d198e3e18aeab6e6e41a5f1b56ea8638b2246bffb75"} Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.173918 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-7cks9" Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.173933 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbbcaa70df4aba5b6d515d198e3e18aeab6e6e41a5f1b56ea8638b2246bffb75" Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.177273 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a845f908-18e9-47e2-bc4f-01308c8a69b3/mysql-bootstrap/0.log" Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.178410 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a845f908-18e9-47e2-bc4f-01308c8a69b3/galera/0.log" Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.300976 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_24a891be-9404-4083-9503-8935ce9545c0/cloudkitty-proc/0.log" Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.443823 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_67301629-da8b-43e3-9c9e-fe99444a6ef1/openstackclient/0.log" Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.614651 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fvnsh_19953a4a-b2c2-42f5-a48b-a217cf7b7ab0/ovn-controller/0.log" Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.708749 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m"] Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.720062 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-mxg7m"] Feb 18 02:00:04 crc kubenswrapper[4858]: I0218 02:00:04.879087 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_024c4106-6664-48f0-a098-6638f4d9a9f5/nova-metadata-metadata/0.log" Feb 18 02:00:05 crc kubenswrapper[4858]: I0218 02:00:05.442880 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc" path="/var/lib/kubelet/pods/2efbb8f7-f95c-467a-a015-3d7e7cb2c0bc/volumes" Feb 18 02:00:06 crc kubenswrapper[4858]: I0218 02:00:06.364645 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xxbqx_b624e2b4-b51c-424d-9e84-adc1286475e7/openstack-network-exporter/0.log" Feb 18 02:00:06 crc kubenswrapper[4858]: E0218 02:00:06.420569 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:00:06 crc kubenswrapper[4858]: I0218 02:00:06.429319 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qn9qf_131eb8ce-e6be-487f-b698-370140a1a338/ovsdb-server-init/0.log" Feb 18 02:00:06 crc kubenswrapper[4858]: I0218 02:00:06.590304 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qn9qf_131eb8ce-e6be-487f-b698-370140a1a338/ovsdb-server-init/0.log" Feb 18 02:00:06 crc kubenswrapper[4858]: I0218 02:00:06.607169 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-qn9qf_131eb8ce-e6be-487f-b698-370140a1a338/ovsdb-server/0.log" Feb 18 02:00:06 crc kubenswrapper[4858]: I0218 02:00:06.652874 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qn9qf_131eb8ce-e6be-487f-b698-370140a1a338/ovs-vswitchd/0.log" Feb 18 02:00:06 crc kubenswrapper[4858]: I0218 02:00:06.792891 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_53a84120-080a-41f4-a4de-e52521c976c8/openstack-network-exporter/0.log" Feb 18 02:00:06 crc kubenswrapper[4858]: I0218 02:00:06.874783 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_53a84120-080a-41f4-a4de-e52521c976c8/ovn-northd/0.log" Feb 18 02:00:07 crc kubenswrapper[4858]: I0218 02:00:07.005608 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7eb932c6-138e-44fc-b382-6e702ea9d39b/openstack-network-exporter/0.log" Feb 18 02:00:07 crc kubenswrapper[4858]: I0218 02:00:07.025670 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7eb932c6-138e-44fc-b382-6e702ea9d39b/ovsdbserver-nb/0.log" Feb 18 02:00:07 crc kubenswrapper[4858]: I0218 02:00:07.140168 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_af8bc938-e065-4d61-9abe-62806f59470d/openstack-network-exporter/0.log" Feb 18 02:00:07 crc kubenswrapper[4858]: I0218 02:00:07.211872 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_af8bc938-e065-4d61-9abe-62806f59470d/ovsdbserver-sb/0.log" Feb 18 02:00:07 crc kubenswrapper[4858]: I0218 02:00:07.415443 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-666bf74cdd-hjbwv_08bb5fcc-79c7-4733-a26a-192b9b9fa955/placement-api/0.log" Feb 18 02:00:07 crc kubenswrapper[4858]: I0218 02:00:07.477352 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-666bf74cdd-hjbwv_08bb5fcc-79c7-4733-a26a-192b9b9fa955/placement-log/0.log" Feb 18 02:00:08 crc kubenswrapper[4858]: I0218 02:00:08.175749 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7e61127a-3243-441c-a9e5-8eafb19aeac5/init-config-reloader/0.log" Feb 18 02:00:08 crc kubenswrapper[4858]: I0218 02:00:08.364683 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7e61127a-3243-441c-a9e5-8eafb19aeac5/config-reloader/0.log" Feb 18 02:00:08 crc kubenswrapper[4858]: I0218 02:00:08.404899 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7e61127a-3243-441c-a9e5-8eafb19aeac5/init-config-reloader/0.log" Feb 18 02:00:08 crc kubenswrapper[4858]: I0218 02:00:08.433160 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7e61127a-3243-441c-a9e5-8eafb19aeac5/prometheus/0.log" Feb 18 02:00:08 crc kubenswrapper[4858]: I0218 02:00:08.497758 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7e61127a-3243-441c-a9e5-8eafb19aeac5/thanos-sidecar/0.log" Feb 18 02:00:08 crc kubenswrapper[4858]: I0218 02:00:08.888364 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_85c1c26c-0457-4e59-b0a5-f62699e06d2c/setup-container/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.085009 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_85c1c26c-0457-4e59-b0a5-f62699e06d2c/rabbitmq/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.126939 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_85c1c26c-0457-4e59-b0a5-f62699e06d2c/setup-container/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.225484 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_930f0d86-3387-4a31-9e89-09f5b92c4ae4/setup-container/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.365722 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_930f0d86-3387-4a31-9e89-09f5b92c4ae4/setup-container/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.459984 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_930f0d86-3387-4a31-9e89-09f5b92c4ae4/rabbitmq/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.491641 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-762ch_15f5690b-3488-41ab-ba71-6aaf7f6b6bbf/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.781954 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-85xzq_0d1d2c63-5add-4004-90e1-54f46ac421e4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.888614 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77fb7b987-d9jrg_b6d69568-ccdd-4684-bc2b-6b6893923701/proxy-httpd/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.909474 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77fb7b987-d9jrg_b6d69568-ccdd-4684-bc2b-6b6893923701/proxy-server/0.log" Feb 18 02:00:09 crc kubenswrapper[4858]: I0218 02:00:09.989188 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gc9g7_bb7b9b3c-2a05-45ae-814b-f7a5058ee1c2/swift-ring-rebalance/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.207160 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/account-auditor/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.209045 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/account-reaper/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.233790 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/account-replicator/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.435816 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/container-server/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.437755 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/account-server/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.439094 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/container-auditor/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.583224 4858 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/container-replicator/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.716481 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/container-updater/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.724935 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/object-expirer/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.782252 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/object-auditor/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.868373 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/object-replicator/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.925873 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/object-updater/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.926550 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/object-server/0.log" Feb 18 02:00:10 crc kubenswrapper[4858]: I0218 02:00:10.969107 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/rsync/0.log" Feb 18 02:00:11 crc kubenswrapper[4858]: I0218 02:00:11.064077 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d0600ce0-ec0e-48b8-b22e-7f94ffd40c07/swift-recon-cron/0.log" Feb 18 02:00:11 crc kubenswrapper[4858]: I0218 02:00:11.419741 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:00:11 crc kubenswrapper[4858]: E0218 02:00:11.419998 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:00:12 crc kubenswrapper[4858]: E0218 02:00:12.421040 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:00:14 crc kubenswrapper[4858]: I0218 02:00:14.646473 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_31807c8a-5224-4df1-a761-10031d623fa5/memcached/0.log" Feb 18 02:00:17 crc kubenswrapper[4858]: E0218 02:00:17.427513 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:00:22 crc 
kubenswrapper[4858]: I0218 02:00:22.419962 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:00:22 crc kubenswrapper[4858]: E0218 02:00:22.420679 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:00:25 crc kubenswrapper[4858]: E0218 02:00:25.421264 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:00:29 crc kubenswrapper[4858]: E0218 02:00:29.421088 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:00:33 crc kubenswrapper[4858]: I0218 02:00:33.791216 4858 scope.go:117] "RemoveContainer" containerID="25b3c2ae65b6b9459c29a744c97bd8150f4b2e6807ef8b2fba493f1c1e322e6f" Feb 18 02:00:37 crc kubenswrapper[4858]: I0218 02:00:37.428058 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:00:37 crc kubenswrapper[4858]: E0218 02:00:37.428822 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:00:38 crc kubenswrapper[4858]: E0218 02:00:38.422015 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:00:40 crc kubenswrapper[4858]: I0218 02:00:40.020243 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck_3b97cc05-751a-49e4-b75b-7f2606d14fdf/util/0.log" Feb 18 02:00:40 crc kubenswrapper[4858]: I0218 02:00:40.245574 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck_3b97cc05-751a-49e4-b75b-7f2606d14fdf/util/0.log" Feb 18 02:00:40 crc kubenswrapper[4858]: I0218 02:00:40.272020 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck_3b97cc05-751a-49e4-b75b-7f2606d14fdf/pull/0.log" Feb 18 02:00:40 crc kubenswrapper[4858]: 
I0218 02:00:40.280838 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck_3b97cc05-751a-49e4-b75b-7f2606d14fdf/pull/0.log" Feb 18 02:00:40 crc kubenswrapper[4858]: E0218 02:00:40.421678 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:00:40 crc kubenswrapper[4858]: I0218 02:00:40.516638 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck_3b97cc05-751a-49e4-b75b-7f2606d14fdf/extract/0.log" Feb 18 02:00:40 crc kubenswrapper[4858]: I0218 02:00:40.524538 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck_3b97cc05-751a-49e4-b75b-7f2606d14fdf/util/0.log" Feb 18 02:00:40 crc kubenswrapper[4858]: I0218 02:00:40.731001 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e72qbck_3b97cc05-751a-49e4-b75b-7f2606d14fdf/pull/0.log" Feb 18 02:00:41 crc kubenswrapper[4858]: I0218 02:00:41.067435 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-bkng8_ca07929f-e6f1-4b35-bcd8-b8a8c2fa6ce6/manager/0.log" Feb 18 02:00:41 crc kubenswrapper[4858]: I0218 02:00:41.396335 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-ksv8b_597262ab-929d-4c51-8400-d6a6df47dcbd/manager/0.log" Feb 18 02:00:41 crc kubenswrapper[4858]: I0218 02:00:41.651994 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-74zsv_9df9a5db-2273-4253-9b76-b67377d8f7f6/manager/0.log" Feb 18 02:00:41 crc kubenswrapper[4858]: I0218 02:00:41.890353 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-rlds4_b0ca0509-6112-4163-a060-ea15122be64a/manager/0.log" Feb 18 02:00:42 crc kubenswrapper[4858]: I0218 02:00:42.400986 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-rzzqb_758bf8e1-fe1b-4c02-8ad8-6d80237e0024/manager/0.log" Feb 18 02:00:42 crc kubenswrapper[4858]: I0218 02:00:42.553324 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-ndk6f_58a2adef-c01f-464e-aa1d-8c2d8a6e5c58/manager/0.log" Feb 18 02:00:42 crc kubenswrapper[4858]: I0218 02:00:42.867062 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-qxqhh_f5dba120-621f-4686-8e83-6f10779d8cfb/manager/0.log" Feb 18 02:00:43 crc kubenswrapper[4858]: I0218 02:00:43.020339 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-kvqvz_bddd921f-895d-4b1d-8203-2aff8a721ed9/manager/0.log" Feb 18 02:00:43 crc kubenswrapper[4858]: I0218 02:00:43.315076 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-xwgm9_28b5bfad-085d-48c6-b15f-c431d57de698/manager/0.log" Feb 18 02:00:43 crc kubenswrapper[4858]: I0218 02:00:43.611974 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-lrmvx_c33cc4eb-a44e-4b2f-8ea8-1688d831a12a/manager/0.log" Feb 18 02:00:43 crc kubenswrapper[4858]: I0218 02:00:43.617651 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-qqgpg_dda54f36-cfc8-468e-8101-f8041735931f/manager/0.log" Feb 18 02:00:43 crc kubenswrapper[4858]: I0218 02:00:43.961282 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-8v5bz_11bc7389-c53b-4030-892b-43da85d70fe1/manager/0.log" Feb 18 02:00:44 crc kubenswrapper[4858]: I0218 02:00:44.365567 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cd82rn_229552d0-e72e-49af-a4c7-6052e2a7bf5a/manager/0.log" Feb 18 02:00:44 crc kubenswrapper[4858]: I0218 02:00:44.712407 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-69ff8ccd5-kwxmx_05ed6418-42b7-4994-9e6b-ced846840c80/operator/0.log" Feb 18 02:00:45 crc kubenswrapper[4858]: I0218 02:00:45.037973 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sq8rn_a0f2c0db-96cb-4884-80fe-20adeb5728cf/registry-server/0.log" Feb 18 02:00:45 crc kubenswrapper[4858]: I0218 02:00:45.351656 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-9k4wv_860622ee-6268-4ff0-a2ae-403ae8b984fc/manager/0.log" Feb 18 02:00:45 crc kubenswrapper[4858]: I0218 02:00:45.606790 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-5b4nx_447c1cfc-d76f-4985-bd95-285a3fbc63cc/manager/0.log" Feb 18 02:00:45 crc kubenswrapper[4858]: I0218 02:00:45.844256 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dqvkf_b83c91fe-13d0-4711-9f90-3da887fa657d/operator/0.log" Feb 18 02:00:46 crc kubenswrapper[4858]: I0218 02:00:46.047161 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-xhtjl_eae2173c-97fd-4d89-8d72-0d44f7c87f9b/manager/0.log" Feb 18 02:00:46 crc kubenswrapper[4858]: I0218 02:00:46.542810 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-ghrkx_12badb74-0862-49e0-95a9-2e29d4b8dcf7/manager/0.log" Feb 18 02:00:46 crc kubenswrapper[4858]: I0218 02:00:46.675593 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-669759659c-2sgf5_577edb6b-435b-4d2e-bb6c-3f9c7bac9256/manager/0.log" Feb 18 02:00:46 crc kubenswrapper[4858]: I0218 02:00:46.957194 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-cqmz8_54724d5e-2417-4241-9fd0-36f9e3c72124/manager/0.log" Feb 18 02:00:47 crc kubenswrapper[4858]: I0218 02:00:47.003608 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-dm2f9_f3e44d9b-6d44-4aa9-9100-c2e139131ec9/manager/0.log" Feb 18 02:00:47 crc kubenswrapper[4858]: I0218 02:00:47.109521 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c6f9cb8b-f7txj_e60cf8fd-9033-4f85-a2a1-16441bd58a56/manager/0.log" Feb 18 02:00:48 crc kubenswrapper[4858]: I0218 02:00:48.419436 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:00:48 crc kubenswrapper[4858]: E0218 02:00:48.419879 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:00:51 crc kubenswrapper[4858]: E0218 02:00:51.420677 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:00:51 crc kubenswrapper[4858]: E0218 02:00:51.421450 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:00:52 crc kubenswrapper[4858]: I0218 02:00:52.810933 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-8hqkm_e28fd875-635a-43eb-ae2e-2544aa39cc84/manager/0.log" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.154325 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523001-j8pzm"] Feb 18 02:01:00 crc kubenswrapper[4858]: E0218 02:01:00.155654 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f0e5d4-b37c-427b-a913-969a53a69c78" containerName="collect-profiles" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.155677 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f0e5d4-b37c-427b-a913-969a53a69c78" containerName="collect-profiles" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.156122 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f0e5d4-b37c-427b-a913-969a53a69c78" containerName="collect-profiles" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.157368 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.162253 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523001-j8pzm"] Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.310593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-combined-ca-bundle\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.310972 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-fernet-keys\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.311094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws5tl\" (UniqueName: \"kubernetes.io/projected/ec45d1bb-fc90-4839-a420-ca3f822bd158-kube-api-access-ws5tl\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.311117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-config-data\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.414037 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws5tl\" (UniqueName: \"kubernetes.io/projected/ec45d1bb-fc90-4839-a420-ca3f822bd158-kube-api-access-ws5tl\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.414135 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-config-data\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.414256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-combined-ca-bundle\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.414441 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-fernet-keys\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.561627 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-fernet-keys\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.561701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-combined-ca-bundle\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.562748 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-config-data\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.564217 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws5tl\" (UniqueName: \"kubernetes.io/projected/ec45d1bb-fc90-4839-a420-ca3f822bd158-kube-api-access-ws5tl\") pod \"keystone-cron-29523001-j8pzm\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:00 crc kubenswrapper[4858]: I0218 02:01:00.774274 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:01 crc kubenswrapper[4858]: I0218 02:01:01.276702 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523001-j8pzm"] Feb 18 02:01:01 crc kubenswrapper[4858]: I0218 02:01:01.662507 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523001-j8pzm" event={"ID":"ec45d1bb-fc90-4839-a420-ca3f822bd158","Type":"ContainerStarted","Data":"8bfbaee3d9297e48d386e5ac84cf202de0c306a0f727534726315dc7aeabf355"} Feb 18 02:01:01 crc kubenswrapper[4858]: I0218 02:01:01.663081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523001-j8pzm" event={"ID":"ec45d1bb-fc90-4839-a420-ca3f822bd158","Type":"ContainerStarted","Data":"bd298965f43b97b40508f6f97282c31715b3394a8e2f4cbb6c621ad5dbbd6d06"} Feb 18 02:01:01 crc kubenswrapper[4858]: I0218 02:01:01.689854 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523001-j8pzm" podStartSLOduration=1.689830904 podStartE2EDuration="1.689830904s" podCreationTimestamp="2026-02-18 02:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 02:01:01.688943242 +0000 UTC m=+5214.994779984" watchObservedRunningTime="2026-02-18 02:01:01.689830904 +0000 UTC m=+5214.995667646" Feb 18 02:01:02 crc kubenswrapper[4858]: I0218 02:01:02.419980 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:01:02 crc kubenswrapper[4858]: E0218 02:01:02.420315 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:01:02 crc kubenswrapper[4858]: E0218 02:01:02.421620 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:01:02 crc kubenswrapper[4858]: E0218 02:01:02.422111 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:01:04 crc kubenswrapper[4858]: I0218 02:01:04.692849 4858 generic.go:334] "Generic (PLEG): container finished" podID="ec45d1bb-fc90-4839-a420-ca3f822bd158" containerID="8bfbaee3d9297e48d386e5ac84cf202de0c306a0f727534726315dc7aeabf355" exitCode=0 Feb 18 02:01:04 crc kubenswrapper[4858]: I0218 02:01:04.694300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523001-j8pzm" event={"ID":"ec45d1bb-fc90-4839-a420-ca3f822bd158","Type":"ContainerDied","Data":"8bfbaee3d9297e48d386e5ac84cf202de0c306a0f727534726315dc7aeabf355"} Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.135462 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.241963 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-combined-ca-bundle\") pod \"ec45d1bb-fc90-4839-a420-ca3f822bd158\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.242041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws5tl\" (UniqueName: \"kubernetes.io/projected/ec45d1bb-fc90-4839-a420-ca3f822bd158-kube-api-access-ws5tl\") pod \"ec45d1bb-fc90-4839-a420-ca3f822bd158\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.242085 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-fernet-keys\") pod \"ec45d1bb-fc90-4839-a420-ca3f822bd158\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.242259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-config-data\") pod \"ec45d1bb-fc90-4839-a420-ca3f822bd158\" (UID: \"ec45d1bb-fc90-4839-a420-ca3f822bd158\") " Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.247734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec45d1bb-fc90-4839-a420-ca3f822bd158-kube-api-access-ws5tl" (OuterVolumeSpecName: "kube-api-access-ws5tl") pod "ec45d1bb-fc90-4839-a420-ca3f822bd158" (UID: "ec45d1bb-fc90-4839-a420-ca3f822bd158"). InnerVolumeSpecName "kube-api-access-ws5tl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.247873 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ec45d1bb-fc90-4839-a420-ca3f822bd158" (UID: "ec45d1bb-fc90-4839-a420-ca3f822bd158"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.326638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec45d1bb-fc90-4839-a420-ca3f822bd158" (UID: "ec45d1bb-fc90-4839-a420-ca3f822bd158"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.337542 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-config-data" (OuterVolumeSpecName: "config-data") pod "ec45d1bb-fc90-4839-a420-ca3f822bd158" (UID: "ec45d1bb-fc90-4839-a420-ca3f822bd158"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.347961 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.347996 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.348007 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws5tl\" (UniqueName: \"kubernetes.io/projected/ec45d1bb-fc90-4839-a420-ca3f822bd158-kube-api-access-ws5tl\") on node \"crc\" DevicePath \"\"" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.348017 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec45d1bb-fc90-4839-a420-ca3f822bd158-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.722559 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523001-j8pzm" event={"ID":"ec45d1bb-fc90-4839-a420-ca3f822bd158","Type":"ContainerDied","Data":"bd298965f43b97b40508f6f97282c31715b3394a8e2f4cbb6c621ad5dbbd6d06"} Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.722903 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd298965f43b97b40508f6f97282c31715b3394a8e2f4cbb6c621ad5dbbd6d06" Feb 18 02:01:06 crc kubenswrapper[4858]: I0218 02:01:06.722655 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523001-j8pzm" Feb 18 02:01:11 crc kubenswrapper[4858]: I0218 02:01:11.857285 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-574cj_93a92f34-d9a8-4276-8a97-3f129c4db452/control-plane-machine-set-operator/0.log" Feb 18 02:01:12 crc kubenswrapper[4858]: I0218 02:01:12.021434 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-27k9h_6269b5c1-c01e-4f81-8c44-94455a9cc858/kube-rbac-proxy/0.log" Feb 18 02:01:12 crc kubenswrapper[4858]: I0218 02:01:12.066306 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-27k9h_6269b5c1-c01e-4f81-8c44-94455a9cc858/machine-api-operator/0.log" Feb 18 02:01:13 crc kubenswrapper[4858]: E0218 02:01:13.421947 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:01:15 crc kubenswrapper[4858]: I0218 02:01:15.434030 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:01:15 crc kubenswrapper[4858]: E0218 02:01:15.436458 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:01:15 crc kubenswrapper[4858]: E0218 02:01:15.439135 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:01:26 crc kubenswrapper[4858]: I0218 02:01:26.420781 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:01:26 crc kubenswrapper[4858]: E0218 02:01:26.422206 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:01:26 crc kubenswrapper[4858]: E0218 02:01:26.422748 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:01:27 crc kubenswrapper[4858]: E0218 02:01:27.426125 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:01:28 crc kubenswrapper[4858]: I0218 02:01:28.033835 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-bjzvr_d49e20f5-2603-45f9-8250-61044120864d/cert-manager-controller/0.log" Feb 18 02:01:28 crc kubenswrapper[4858]: I0218 02:01:28.232162 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mg9m6_08027ec7-d21f-49db-86fa-f66a295a15ab/cert-manager-cainjector/0.log" Feb 18 02:01:28 crc kubenswrapper[4858]: I0218 02:01:28.293218 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-j4mwd_9e4af7ad-05c1-4d35-9f79-dfb6aa002f52/cert-manager-webhook/0.log" Feb 18 02:01:38 crc kubenswrapper[4858]: E0218 02:01:38.421474 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:01:39 crc kubenswrapper[4858]: E0218 02:01:39.422072 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:01:40 crc kubenswrapper[4858]: I0218 02:01:40.419973 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:01:40 crc kubenswrapper[4858]: E0218 02:01:40.420638 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:01:44 crc kubenswrapper[4858]: I0218 02:01:44.688766 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-sfxkk_95ad9559-743e-4d16-8dba-6cea830de767/nmstate-console-plugin/0.log" Feb 18 02:01:44 crc kubenswrapper[4858]: I0218 02:01:44.785726 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gjmb7_c83e1b85-4bb0-47f8-b152-a5f5c34cc919/nmstate-handler/0.log" Feb 18 02:01:44 crc kubenswrapper[4858]: I0218 02:01:44.894305 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jsdd7_897ba371-53cf-440a-9045-2d45bfae9032/kube-rbac-proxy/0.log" Feb 18 02:01:44 crc kubenswrapper[4858]: I0218 02:01:44.926647 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jsdd7_897ba371-53cf-440a-9045-2d45bfae9032/nmstate-metrics/0.log" Feb 18 02:01:45 crc kubenswrapper[4858]: I0218 02:01:45.086998 4858 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-wq6f6_3d9133a3-024f-4621-a1e2-c7393b87df23/nmstate-operator/0.log" Feb 18 02:01:45 crc kubenswrapper[4858]: I0218 02:01:45.164269 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-6nkwp_9ccccd6f-f4c0-4948-a851-e837f10702c3/nmstate-webhook/0.log" Feb 18 02:01:49 crc kubenswrapper[4858]: E0218 02:01:49.422452 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:01:51 crc kubenswrapper[4858]: E0218 02:01:51.420668 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:01:52 crc kubenswrapper[4858]: I0218 02:01:52.419387 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:01:52 crc kubenswrapper[4858]: E0218 02:01:52.420163 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:02:00 crc kubenswrapper[4858]: I0218 02:02:00.605404 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c5fb49d49-cxcxg_b2422dde-b68b-41d0-acbf-2473c28f5177/kube-rbac-proxy/0.log" Feb 18 02:02:00 crc kubenswrapper[4858]: I0218 02:02:00.668912 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c5fb49d49-cxcxg_b2422dde-b68b-41d0-acbf-2473c28f5177/manager/0.log" Feb 18 02:02:01 crc kubenswrapper[4858]: E0218 02:02:01.421980 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:02:03 crc kubenswrapper[4858]: E0218 02:02:03.421704 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:02:06 crc kubenswrapper[4858]: I0218 02:02:06.420456 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:02:06 crc kubenswrapper[4858]: E0218 02:02:06.421244 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:02:13 crc kubenswrapper[4858]: E0218 02:02:13.422413 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:02:15 crc kubenswrapper[4858]: E0218 02:02:15.423342 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:02:16 crc kubenswrapper[4858]: I0218 02:02:16.853170 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-kmhxx_560a0ca4-78ca-406c-a540-51483acdb0f8/prometheus-operator/0.log" Feb 18 02:02:17 crc kubenswrapper[4858]: I0218 02:02:17.027644 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_d28ad27c-eed0-473d-9257-1ea8f6c7291c/prometheus-operator-admission-webhook/0.log" Feb 18 02:02:17 crc kubenswrapper[4858]: I0218 02:02:17.144724 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_5d6270c6-d227-4243-b495-19306dfa376c/prometheus-operator-admission-webhook/0.log" Feb 18 02:02:17 crc kubenswrapper[4858]: I0218 02:02:17.205407 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-rfgvn_4752855a-6a66-4ba8-a484-00326c32d431/operator/0.log" Feb 18 02:02:18 crc kubenswrapper[4858]: I0218 02:02:18.259027 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-xmkpw_5d03f9d0-b687-4d66-9f89-297155cf2d51/perses-operator/0.log" Feb 18 02:02:19 crc kubenswrapper[4858]: I0218 02:02:19.419948 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:02:19 crc kubenswrapper[4858]: E0218 02:02:19.420413 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" Feb 18 02:02:26 crc kubenswrapper[4858]: E0218 02:02:26.423423 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 
02:02:28 crc kubenswrapper[4858]: E0218 02:02:28.421804 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.191817 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-vbx9l_650a8673-9066-448b-bab4-a90e9203dc70/kube-rbac-proxy/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.212967 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-vbx9l_650a8673-9066-448b-bab4-a90e9203dc70/controller/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.383987 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-frr-files/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.577993 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-reloader/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.582768 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-reloader/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.585922 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-frr-files/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.591252 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-metrics/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.752217 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-frr-files/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.768535 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-reloader/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.772916 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-metrics/0.log" Feb 18 02:02:33 crc kubenswrapper[4858]: I0218 02:02:33.819907 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-metrics/0.log" Feb 18 02:02:34 crc kubenswrapper[4858]: I0218 02:02:34.420124 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:02:34 crc kubenswrapper[4858]: I0218 02:02:34.450136 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-reloader/0.log" Feb 18 02:02:34 crc kubenswrapper[4858]: I0218 02:02:34.480415 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-frr-files/0.log" Feb 18 02:02:34 crc kubenswrapper[4858]: I0218 02:02:34.508976 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/cp-metrics/0.log" Feb 18 02:02:34 crc kubenswrapper[4858]: I0218 02:02:34.543860 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/controller/0.log" Feb 18 02:02:34 crc kubenswrapper[4858]: I0218 02:02:34.793910 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/frr-metrics/0.log" Feb 18 02:02:34 crc kubenswrapper[4858]: I0218 02:02:34.863487 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/kube-rbac-proxy/0.log" Feb 18 02:02:34 crc kubenswrapper[4858]: I0218 02:02:34.865355 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/kube-rbac-proxy-frr/0.log" Feb 18 02:02:35 crc kubenswrapper[4858]: I0218 02:02:35.031291 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/reloader/0.log" Feb 18 02:02:35 crc kubenswrapper[4858]: I0218 02:02:35.083229 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-qv72m_83a08fae-fbbe-420a-a998-b8ecafd45b71/frr-k8s-webhook-server/0.log" Feb 18 02:02:35 crc kubenswrapper[4858]: I0218 02:02:35.324581 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5fdf7d4974-w5ljk_a459fc2d-abc9-40ac-9834-23438e1d8d3d/manager/0.log" Feb 18 02:02:35 crc kubenswrapper[4858]: I0218 02:02:35.447717 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-858d9bff6d-7w8qf_23f9d825-01d5-40a5-9999-8b72fbaee043/webhook-server/0.log" Feb 18 02:02:35 crc kubenswrapper[4858]: I0218 02:02:35.528841 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9xp4k_3dad204a-e97b-4be0-bc97-b3327c0eaef9/kube-rbac-proxy/0.log" Feb 18 02:02:35 crc kubenswrapper[4858]: I0218 02:02:35.639093 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"faf01201cfa461d475f5939c0b4356cca132162d62c13a283f57f29c608f2176"} Feb 18 02:02:36 crc kubenswrapper[4858]: I0218 02:02:36.283584 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bdtxp_98aca645-8ef3-479a-9b7b-732ad5f24375/frr/0.log" Feb 18 02:02:36 crc kubenswrapper[4858]: I0218 02:02:36.556478 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9xp4k_3dad204a-e97b-4be0-bc97-b3327c0eaef9/speaker/0.log" Feb 18 02:02:40 crc kubenswrapper[4858]: E0218 02:02:40.422944 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:02:40 crc kubenswrapper[4858]: E0218 02:02:40.422981 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.093512 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l_454c1998-5aac-4db1-a204-bbf491c27b13/util/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.370824 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l_454c1998-5aac-4db1-a204-bbf491c27b13/pull/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.394325 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l_454c1998-5aac-4db1-a204-bbf491c27b13/util/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.399514 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l_454c1998-5aac-4db1-a204-bbf491c27b13/pull/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.543746 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l_454c1998-5aac-4db1-a204-bbf491c27b13/util/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.574599 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l_454c1998-5aac-4db1-a204-bbf491c27b13/extract/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.587070 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651md29l_454c1998-5aac-4db1-a204-bbf491c27b13/pull/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.697927 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl_891969ef-ef73-4652-97d2-bc6a015fcdbd/util/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.881597 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl_891969ef-ef73-4652-97d2-bc6a015fcdbd/pull/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.884922 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl_891969ef-ef73-4652-97d2-bc6a015fcdbd/pull/0.log" Feb 18 02:02:51 crc kubenswrapper[4858]: I0218 02:02:51.910833 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl_891969ef-ef73-4652-97d2-bc6a015fcdbd/util/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.044372 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl_891969ef-ef73-4652-97d2-bc6a015fcdbd/extract/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.053995 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl_891969ef-ef73-4652-97d2-bc6a015fcdbd/util/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.098278 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gtrsl_891969ef-ef73-4652-97d2-bc6a015fcdbd/pull/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.219615 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh_f00b2490-8dc9-4640-924a-0d90a2bca37e/util/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.420525 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh_f00b2490-8dc9-4640-924a-0d90a2bca37e/util/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.425477 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh_f00b2490-8dc9-4640-924a-0d90a2bca37e/pull/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.469231 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh_f00b2490-8dc9-4640-924a-0d90a2bca37e/pull/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.596766 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh_f00b2490-8dc9-4640-924a-0d90a2bca37e/util/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.624882 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh_f00b2490-8dc9-4640-924a-0d90a2bca37e/pull/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.666464 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bvcwh_f00b2490-8dc9-4640-924a-0d90a2bca37e/extract/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.776782 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmdlz_9ca922c7-2f96-4553-9d73-90ec93132ab0/extract-utilities/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.947471 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmdlz_9ca922c7-2f96-4553-9d73-90ec93132ab0/extract-utilities/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.972888 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmdlz_9ca922c7-2f96-4553-9d73-90ec93132ab0/extract-content/0.log" Feb 18 02:02:52 crc kubenswrapper[4858]: I0218 02:02:52.985763 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmdlz_9ca922c7-2f96-4553-9d73-90ec93132ab0/extract-content/0.log" Feb 18 02:02:53 crc kubenswrapper[4858]: I0218 02:02:53.197551 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmdlz_9ca922c7-2f96-4553-9d73-90ec93132ab0/extract-utilities/0.log" Feb 18 02:02:53 crc kubenswrapper[4858]: I0218 02:02:53.197614 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-dmdlz_9ca922c7-2f96-4553-9d73-90ec93132ab0/extract-content/0.log" Feb 18 02:02:53 crc kubenswrapper[4858]: I0218 02:02:53.417382 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xwz_c65b0616-ca8a-47a9-8cd0-2527a88c4779/extract-utilities/0.log" Feb 18 02:02:53 crc kubenswrapper[4858]: E0218 02:02:53.424720 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:02:53 crc kubenswrapper[4858]: I0218 02:02:53.602397 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xwz_c65b0616-ca8a-47a9-8cd0-2527a88c4779/extract-utilities/0.log" Feb 18 02:02:53 crc kubenswrapper[4858]: I0218 02:02:53.643825 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xwz_c65b0616-ca8a-47a9-8cd0-2527a88c4779/extract-content/0.log" Feb 18 02:02:53 crc kubenswrapper[4858]: I0218 02:02:53.649787 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dmdlz_9ca922c7-2f96-4553-9d73-90ec93132ab0/registry-server/0.log" Feb 18 02:02:53 crc kubenswrapper[4858]: I0218 02:02:53.669566 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xwz_c65b0616-ca8a-47a9-8cd0-2527a88c4779/extract-content/0.log" Feb 18 02:02:53 crc kubenswrapper[4858]: I0218 02:02:53.848015 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xwz_c65b0616-ca8a-47a9-8cd0-2527a88c4779/extract-content/0.log" Feb 18 02:02:53 crc kubenswrapper[4858]: I0218 02:02:53.903932 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xwz_c65b0616-ca8a-47a9-8cd0-2527a88c4779/extract-utilities/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.047727 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xwz_c65b0616-ca8a-47a9-8cd0-2527a88c4779/registry-server/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.053440 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77_f984786a-760f-4fa7-91fb-6e1b447db492/util/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.227245 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77_f984786a-760f-4fa7-91fb-6e1b447db492/pull/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.245717 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77_f984786a-760f-4fa7-91fb-6e1b447db492/pull/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.262286 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77_f984786a-760f-4fa7-91fb-6e1b447db492/util/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.407673 4858 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77_f984786a-760f-4fa7-91fb-6e1b447db492/pull/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.417820 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77_f984786a-760f-4fa7-91fb-6e1b447db492/extract/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.460090 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecazsg77_f984786a-760f-4fa7-91fb-6e1b447db492/util/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.569411 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rlwbh_3627ca2b-bc95-444a-a999-b9413f6e1cc0/marketplace-operator/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.673054 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmv6q_9e67a988-e2c1-433a-88de-286490057c27/extract-utilities/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.814428 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmv6q_9e67a988-e2c1-433a-88de-286490057c27/extract-content/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.817662 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmv6q_9e67a988-e2c1-433a-88de-286490057c27/extract-content/0.log" Feb 18 02:02:54 crc kubenswrapper[4858]: I0218 02:02:54.821102 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmv6q_9e67a988-e2c1-433a-88de-286490057c27/extract-utilities/0.log" Feb 18 02:02:55 crc kubenswrapper[4858]: I0218 02:02:55.060443 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmv6q_9e67a988-e2c1-433a-88de-286490057c27/extract-utilities/0.log" Feb 18 02:02:55 crc kubenswrapper[4858]: I0218 02:02:55.064610 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmv6q_9e67a988-e2c1-433a-88de-286490057c27/extract-content/0.log" Feb 18 02:02:55 crc kubenswrapper[4858]: I0218 02:02:55.147168 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8qskx_b98459bf-9693-495a-ac0d-f46be8ea2df1/extract-utilities/0.log" Feb 18 02:02:55 crc kubenswrapper[4858]: I0218 02:02:55.284137 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmv6q_9e67a988-e2c1-433a-88de-286490057c27/registry-server/0.log" Feb 18 02:02:55 crc kubenswrapper[4858]: I0218 02:02:55.319055 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8qskx_b98459bf-9693-495a-ac0d-f46be8ea2df1/extract-content/0.log" Feb 18 02:02:55 crc kubenswrapper[4858]: I0218 02:02:55.347999 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8qskx_b98459bf-9693-495a-ac0d-f46be8ea2df1/extract-utilities/0.log" Feb 18 02:02:55 crc kubenswrapper[4858]: I0218 02:02:55.349217 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8qskx_b98459bf-9693-495a-ac0d-f46be8ea2df1/extract-content/0.log" Feb 18 02:02:55 
crc kubenswrapper[4858]: E0218 02:02:55.422126 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:02:55 crc kubenswrapper[4858]: I0218 02:02:55.592538 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8qskx_b98459bf-9693-495a-ac0d-f46be8ea2df1/extract-content/0.log" Feb 18 02:02:55 crc kubenswrapper[4858]: I0218 02:02:55.595041 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8qskx_b98459bf-9693-495a-ac0d-f46be8ea2df1/extract-utilities/0.log" Feb 18 02:02:56 crc kubenswrapper[4858]: I0218 02:02:56.174582 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8qskx_b98459bf-9693-495a-ac0d-f46be8ea2df1/registry-server/0.log" Feb 18 02:03:07 crc kubenswrapper[4858]: E0218 02:03:07.432823 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:03:09 crc kubenswrapper[4858]: E0218 02:03:09.422560 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:03:12 crc kubenswrapper[4858]: I0218 02:03:12.126981 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f598d5d6c-k7p5c_5d6270c6-d227-4243-b495-19306dfa376c/prometheus-operator-admission-webhook/0.log" Feb 18 02:03:12 crc kubenswrapper[4858]: I0218 02:03:12.161122 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-kmhxx_560a0ca4-78ca-406c-a540-51483acdb0f8/prometheus-operator/0.log" Feb 18 02:03:12 crc kubenswrapper[4858]: I0218 02:03:12.217903 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f598d5d6c-6mwxs_d28ad27c-eed0-473d-9257-1ea8f6c7291c/prometheus-operator-admission-webhook/0.log" Feb 18 02:03:12 crc kubenswrapper[4858]: I0218 02:03:12.351658 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-rfgvn_4752855a-6a66-4ba8-a484-00326c32d431/operator/0.log" Feb 18 02:03:12 crc kubenswrapper[4858]: I0218 02:03:12.361733 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-xmkpw_5d03f9d0-b687-4d66-9f89-297155cf2d51/perses-operator/0.log" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.453967 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tml4f"] Feb 18 02:03:13 crc kubenswrapper[4858]: E0218 02:03:13.454617 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ec45d1bb-fc90-4839-a420-ca3f822bd158" containerName="keystone-cron" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.454636 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec45d1bb-fc90-4839-a420-ca3f822bd158" containerName="keystone-cron" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.454957 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec45d1bb-fc90-4839-a420-ca3f822bd158" containerName="keystone-cron" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.456827 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.469997 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tml4f"] Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.517591 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24lbm\" (UniqueName: \"kubernetes.io/projected/a55c79d6-ea3b-427c-92ef-dace22b4a069-kube-api-access-24lbm\") pod \"certified-operators-tml4f\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.517636 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-utilities\") pod \"certified-operators-tml4f\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.517827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-catalog-content\") pod \"certified-operators-tml4f\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.619419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24lbm\" (UniqueName: \"kubernetes.io/projected/a55c79d6-ea3b-427c-92ef-dace22b4a069-kube-api-access-24lbm\") pod \"certified-operators-tml4f\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.619693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-utilities\") pod \"certified-operators-tml4f\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.619801 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-catalog-content\") pod \"certified-operators-tml4f\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.620335 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-catalog-content\") pod 
\"certified-operators-tml4f\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.620799 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-utilities\") pod \"certified-operators-tml4f\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.638794 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24lbm\" (UniqueName: \"kubernetes.io/projected/a55c79d6-ea3b-427c-92ef-dace22b4a069-kube-api-access-24lbm\") pod \"certified-operators-tml4f\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:13 crc kubenswrapper[4858]: I0218 02:03:13.792901 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:14 crc kubenswrapper[4858]: I0218 02:03:14.359256 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tml4f"] Feb 18 02:03:15 crc kubenswrapper[4858]: I0218 02:03:15.079004 4858 generic.go:334] "Generic (PLEG): container finished" podID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerID="ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628" exitCode=0 Feb 18 02:03:15 crc kubenswrapper[4858]: I0218 02:03:15.079344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tml4f" event={"ID":"a55c79d6-ea3b-427c-92ef-dace22b4a069","Type":"ContainerDied","Data":"ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628"} Feb 18 02:03:15 crc kubenswrapper[4858]: I0218 02:03:15.079381 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tml4f" event={"ID":"a55c79d6-ea3b-427c-92ef-dace22b4a069","Type":"ContainerStarted","Data":"d4a97a008ae7754bfc408854f26b3f131b91cceb6901761a4cc60fe1e7a300d5"} Feb 18 02:03:16 crc kubenswrapper[4858]: I0218 02:03:16.088869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tml4f" event={"ID":"a55c79d6-ea3b-427c-92ef-dace22b4a069","Type":"ContainerStarted","Data":"b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59"} Feb 18 02:03:17 crc kubenswrapper[4858]: I0218 02:03:17.099075 4858 generic.go:334] "Generic (PLEG): container finished" podID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerID="b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59" exitCode=0 Feb 18 02:03:17 crc kubenswrapper[4858]: I0218 02:03:17.099127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tml4f" event={"ID":"a55c79d6-ea3b-427c-92ef-dace22b4a069","Type":"ContainerDied","Data":"b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59"} Feb 18 02:03:18 crc kubenswrapper[4858]: I0218 02:03:18.110037 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tml4f" event={"ID":"a55c79d6-ea3b-427c-92ef-dace22b4a069","Type":"ContainerStarted","Data":"ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f"} Feb 18 02:03:18 crc kubenswrapper[4858]: I0218 02:03:18.140185 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-tml4f" podStartSLOduration=2.717407002 podStartE2EDuration="5.140162736s" podCreationTimestamp="2026-02-18 02:03:13 +0000 UTC" firstStartedPulling="2026-02-18 02:03:15.081893751 +0000 UTC m=+5348.387730523" lastFinishedPulling="2026-02-18 02:03:17.504649525 +0000 UTC m=+5350.810486257" observedRunningTime="2026-02-18 02:03:18.127481382 +0000 UTC m=+5351.433318114" watchObservedRunningTime="2026-02-18 02:03:18.140162736 +0000 UTC m=+5351.445999478" Feb 18 02:03:22 crc kubenswrapper[4858]: E0218 02:03:22.424291 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:03:22 crc kubenswrapper[4858]: E0218 02:03:22.424818 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:03:23 crc kubenswrapper[4858]: I0218 02:03:23.794130 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:23 crc kubenswrapper[4858]: I0218 02:03:23.794171 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:23 crc kubenswrapper[4858]: I0218 02:03:23.871157 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:24 crc kubenswrapper[4858]: I0218 02:03:24.221222 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:25 crc kubenswrapper[4858]: I0218 02:03:25.442968 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tml4f"] Feb 18 02:03:26 crc kubenswrapper[4858]: I0218 02:03:26.187007 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tml4f" podUID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerName="registry-server" containerID="cri-o://ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f" gracePeriod=2 Feb 18 02:03:26 crc kubenswrapper[4858]: I0218 02:03:26.774811 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:26 crc kubenswrapper[4858]: I0218 02:03:26.903044 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-catalog-content\") pod \"a55c79d6-ea3b-427c-92ef-dace22b4a069\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " Feb 18 02:03:26 crc kubenswrapper[4858]: I0218 02:03:26.903358 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24lbm\" (UniqueName: \"kubernetes.io/projected/a55c79d6-ea3b-427c-92ef-dace22b4a069-kube-api-access-24lbm\") pod \"a55c79d6-ea3b-427c-92ef-dace22b4a069\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " Feb 18 02:03:26 crc kubenswrapper[4858]: I0218 02:03:26.903611 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-utilities\") pod \"a55c79d6-ea3b-427c-92ef-dace22b4a069\" (UID: \"a55c79d6-ea3b-427c-92ef-dace22b4a069\") " Feb 18 02:03:26 crc kubenswrapper[4858]: I0218 02:03:26.904818 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-utilities" (OuterVolumeSpecName: "utilities") pod "a55c79d6-ea3b-427c-92ef-dace22b4a069" (UID: "a55c79d6-ea3b-427c-92ef-dace22b4a069"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:03:26 crc kubenswrapper[4858]: I0218 02:03:26.910338 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55c79d6-ea3b-427c-92ef-dace22b4a069-kube-api-access-24lbm" (OuterVolumeSpecName: "kube-api-access-24lbm") pod "a55c79d6-ea3b-427c-92ef-dace22b4a069" (UID: "a55c79d6-ea3b-427c-92ef-dace22b4a069"). InnerVolumeSpecName "kube-api-access-24lbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:03:26 crc kubenswrapper[4858]: I0218 02:03:26.961358 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a55c79d6-ea3b-427c-92ef-dace22b4a069" (UID: "a55c79d6-ea3b-427c-92ef-dace22b4a069"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.005960 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.005988 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24lbm\" (UniqueName: \"kubernetes.io/projected/a55c79d6-ea3b-427c-92ef-dace22b4a069-kube-api-access-24lbm\") on node \"crc\" DevicePath \"\"" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.005998 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55c79d6-ea3b-427c-92ef-dace22b4a069-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.198919 4858 generic.go:334] "Generic (PLEG): container finished" podID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerID="ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f" exitCode=0 Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.198958 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tml4f" event={"ID":"a55c79d6-ea3b-427c-92ef-dace22b4a069","Type":"ContainerDied","Data":"ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f"} Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.198982 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tml4f" event={"ID":"a55c79d6-ea3b-427c-92ef-dace22b4a069","Type":"ContainerDied","Data":"d4a97a008ae7754bfc408854f26b3f131b91cceb6901761a4cc60fe1e7a300d5"} Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.198999 4858 scope.go:117] "RemoveContainer" containerID="ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.199034 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tml4f" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.222601 4858 scope.go:117] "RemoveContainer" containerID="b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.237655 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tml4f"] Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.248303 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tml4f"] Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.265364 4858 scope.go:117] "RemoveContainer" containerID="ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.428286 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a55c79d6-ea3b-427c-92ef-dace22b4a069" path="/var/lib/kubelet/pods/a55c79d6-ea3b-427c-92ef-dace22b4a069/volumes" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.873848 4858 scope.go:117] "RemoveContainer" containerID="ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f" Feb 18 02:03:27 crc kubenswrapper[4858]: E0218 02:03:27.874825 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f\": container with ID starting with ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f not found: ID does not exist" containerID="ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.874873 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f"} err="failed to get container status \"ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f\": rpc error: code = NotFound desc = could not find container \"ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f\": container with ID starting with ee3587a88cdb2570df8bed79673db1807fb6e0b1e197ca83ffac0b39297bf85f not found: ID does not exist" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.874899 4858 scope.go:117] "RemoveContainer" containerID="b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59" Feb 18 02:03:27 crc kubenswrapper[4858]: E0218 02:03:27.875182 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59\": container with ID starting with b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59 not found: ID does not exist" containerID="b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.875207 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59"} err="failed to get container status \"b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59\": rpc error: code = NotFound desc = could not find container \"b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59\": container with ID starting with b1e2b174396b58fe919618fbe1b0df894a4d5c7848f2b59b723aa9b3ea06ad59 not found: ID does not exist" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 
02:03:27.875221 4858 scope.go:117] "RemoveContainer" containerID="ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628" Feb 18 02:03:27 crc kubenswrapper[4858]: E0218 02:03:27.875523 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628\": container with ID starting with ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628 not found: ID does not exist" containerID="ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628" Feb 18 02:03:27 crc kubenswrapper[4858]: I0218 02:03:27.875546 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628"} err="failed to get container status \"ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628\": rpc error: code = NotFound desc = could not find container \"ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628\": container with ID starting with ddd688897f58cf244bfefb257756fd32eb3b99693a7b9635e710811c5e3ca628 not found: ID does not exist" Feb 18 02:03:28 crc kubenswrapper[4858]: I0218 02:03:28.929892 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c5fb49d49-cxcxg_b2422dde-b68b-41d0-acbf-2473c28f5177/kube-rbac-proxy/0.log" Feb 18 02:03:28 crc kubenswrapper[4858]: I0218 02:03:28.965763 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c5fb49d49-cxcxg_b2422dde-b68b-41d0-acbf-2473c28f5177/manager/0.log" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.059389 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j8jz4"] Feb 18 02:03:32 crc kubenswrapper[4858]: E0218 02:03:32.060276 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerName="registry-server" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.060287 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerName="registry-server" Feb 18 02:03:32 crc kubenswrapper[4858]: E0218 02:03:32.060311 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerName="extract-utilities" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.060318 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerName="extract-utilities" Feb 18 02:03:32 crc kubenswrapper[4858]: E0218 02:03:32.060324 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerName="extract-content" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.060330 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerName="extract-content" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.060557 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55c79d6-ea3b-427c-92ef-dace22b4a069" containerName="registry-server" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.061946 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.071055 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8jz4"] Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.221929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-utilities\") pod \"community-operators-j8jz4\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.221982 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-catalog-content\") pod \"community-operators-j8jz4\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.222117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqkq\" (UniqueName: \"kubernetes.io/projected/a428f397-a49d-449e-a39e-d1739ea4c0d8-kube-api-access-gjqkq\") pod \"community-operators-j8jz4\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.324253 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-utilities\") pod \"community-operators-j8jz4\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.324321 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-catalog-content\") pod \"community-operators-j8jz4\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.324420 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjqkq\" (UniqueName: \"kubernetes.io/projected/a428f397-a49d-449e-a39e-d1739ea4c0d8-kube-api-access-gjqkq\") pod \"community-operators-j8jz4\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.324826 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-catalog-content\") pod \"community-operators-j8jz4\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.324981 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-utilities\") pod \"community-operators-j8jz4\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.345401 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gjqkq\" (UniqueName: \"kubernetes.io/projected/a428f397-a49d-449e-a39e-d1739ea4c0d8-kube-api-access-gjqkq\") pod \"community-operators-j8jz4\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.388563 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:32 crc kubenswrapper[4858]: I0218 02:03:32.977246 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8jz4"] Feb 18 02:03:32 crc kubenswrapper[4858]: W0218 02:03:32.988202 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda428f397_a49d_449e_a39e_d1739ea4c0d8.slice/crio-0a988784e4cd5a62b0caf93fa5e26dcbbee2a94fe0c27ac1e711607985214c00 WatchSource:0}: Error finding container 0a988784e4cd5a62b0caf93fa5e26dcbbee2a94fe0c27ac1e711607985214c00: Status 404 returned error can't find the container with id 0a988784e4cd5a62b0caf93fa5e26dcbbee2a94fe0c27ac1e711607985214c00 Feb 18 02:03:33 crc kubenswrapper[4858]: I0218 02:03:33.250808 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jz4" event={"ID":"a428f397-a49d-449e-a39e-d1739ea4c0d8","Type":"ContainerStarted","Data":"ef7eeaf201396aedaeb0ecb135421b8ae06d024b9e612f77204d57b910791e46"} Feb 18 02:03:33 crc kubenswrapper[4858]: I0218 02:03:33.251057 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jz4" event={"ID":"a428f397-a49d-449e-a39e-d1739ea4c0d8","Type":"ContainerStarted","Data":"0a988784e4cd5a62b0caf93fa5e26dcbbee2a94fe0c27ac1e711607985214c00"} Feb 18 02:03:33 crc kubenswrapper[4858]: E0218 02:03:33.421629 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:03:34 crc kubenswrapper[4858]: I0218 02:03:34.276315 4858 generic.go:334] "Generic (PLEG): container finished" podID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerID="ef7eeaf201396aedaeb0ecb135421b8ae06d024b9e612f77204d57b910791e46" exitCode=0 Feb 18 02:03:34 crc kubenswrapper[4858]: I0218 02:03:34.276695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jz4" event={"ID":"a428f397-a49d-449e-a39e-d1739ea4c0d8","Type":"ContainerDied","Data":"ef7eeaf201396aedaeb0ecb135421b8ae06d024b9e612f77204d57b910791e46"} Feb 18 02:03:36 crc kubenswrapper[4858]: I0218 02:03:36.299783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jz4" event={"ID":"a428f397-a49d-449e-a39e-d1739ea4c0d8","Type":"ContainerStarted","Data":"92f80cc673e76029ba6cbd3ea3e534c1974573ed1da57f4d1a9294d9a3cf01d7"} Feb 18 02:03:37 crc kubenswrapper[4858]: E0218 02:03:37.426941 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:03:38 crc kubenswrapper[4858]: I0218 02:03:38.321200 4858 generic.go:334] "Generic (PLEG): container finished" podID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerID="92f80cc673e76029ba6cbd3ea3e534c1974573ed1da57f4d1a9294d9a3cf01d7" exitCode=0 Feb 18 02:03:38 crc kubenswrapper[4858]: I0218 02:03:38.321270 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jz4" event={"ID":"a428f397-a49d-449e-a39e-d1739ea4c0d8","Type":"ContainerDied","Data":"92f80cc673e76029ba6cbd3ea3e534c1974573ed1da57f4d1a9294d9a3cf01d7"} Feb 18 02:03:39 crc kubenswrapper[4858]: I0218 02:03:39.332838 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jz4" event={"ID":"a428f397-a49d-449e-a39e-d1739ea4c0d8","Type":"ContainerStarted","Data":"9c8b69ab53e702784eb3dc2d34670570cc4f7a67272f082a0a2d58c9b7317546"} Feb 18 02:03:39 crc kubenswrapper[4858]: I0218 02:03:39.363999 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j8jz4" podStartSLOduration=2.924448164 podStartE2EDuration="7.363981785s" podCreationTimestamp="2026-02-18 02:03:32 +0000 UTC" firstStartedPulling="2026-02-18 02:03:34.282329085 +0000 UTC m=+5367.588165817" lastFinishedPulling="2026-02-18 02:03:38.721862706 +0000 UTC m=+5372.027699438" observedRunningTime="2026-02-18 02:03:39.357780177 +0000 UTC m=+5372.663616909" watchObservedRunningTime="2026-02-18 02:03:39.363981785 +0000 UTC m=+5372.669818517" Feb 18 02:03:42 crc kubenswrapper[4858]: I0218 02:03:42.389385 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:42 crc kubenswrapper[4858]: I0218 02:03:42.389859 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:42 crc kubenswrapper[4858]: I0218 02:03:42.437656 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:43 crc kubenswrapper[4858]: I0218 02:03:43.458701 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:43 crc kubenswrapper[4858]: I0218 02:03:43.676755 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8jz4"] Feb 18 02:03:45 crc kubenswrapper[4858]: I0218 02:03:45.398584 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j8jz4" podUID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerName="registry-server" containerID="cri-o://9c8b69ab53e702784eb3dc2d34670570cc4f7a67272f082a0a2d58c9b7317546" gracePeriod=2 Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.467810 4858 generic.go:334] "Generic (PLEG): container finished" podID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerID="9c8b69ab53e702784eb3dc2d34670570cc4f7a67272f082a0a2d58c9b7317546" exitCode=0 Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.468454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jz4" event={"ID":"a428f397-a49d-449e-a39e-d1739ea4c0d8","Type":"ContainerDied","Data":"9c8b69ab53e702784eb3dc2d34670570cc4f7a67272f082a0a2d58c9b7317546"} Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.745395 4858 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.839252 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjqkq\" (UniqueName: \"kubernetes.io/projected/a428f397-a49d-449e-a39e-d1739ea4c0d8-kube-api-access-gjqkq\") pod \"a428f397-a49d-449e-a39e-d1739ea4c0d8\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.839305 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-catalog-content\") pod \"a428f397-a49d-449e-a39e-d1739ea4c0d8\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.839332 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-utilities\") pod \"a428f397-a49d-449e-a39e-d1739ea4c0d8\" (UID: \"a428f397-a49d-449e-a39e-d1739ea4c0d8\") " Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.840437 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-utilities" (OuterVolumeSpecName: "utilities") pod "a428f397-a49d-449e-a39e-d1739ea4c0d8" (UID: "a428f397-a49d-449e-a39e-d1739ea4c0d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.852533 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a428f397-a49d-449e-a39e-d1739ea4c0d8-kube-api-access-gjqkq" (OuterVolumeSpecName: "kube-api-access-gjqkq") pod "a428f397-a49d-449e-a39e-d1739ea4c0d8" (UID: "a428f397-a49d-449e-a39e-d1739ea4c0d8"). InnerVolumeSpecName "kube-api-access-gjqkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.941333 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjqkq\" (UniqueName: \"kubernetes.io/projected/a428f397-a49d-449e-a39e-d1739ea4c0d8-kube-api-access-gjqkq\") on node \"crc\" DevicePath \"\"" Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.941368 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 02:03:46 crc kubenswrapper[4858]: I0218 02:03:46.941815 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a428f397-a49d-449e-a39e-d1739ea4c0d8" (UID: "a428f397-a49d-449e-a39e-d1739ea4c0d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:03:47 crc kubenswrapper[4858]: I0218 02:03:47.046434 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a428f397-a49d-449e-a39e-d1739ea4c0d8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 02:03:47 crc kubenswrapper[4858]: E0218 02:03:47.428819 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:03:47 crc kubenswrapper[4858]: I0218 02:03:47.499248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jz4" event={"ID":"a428f397-a49d-449e-a39e-d1739ea4c0d8","Type":"ContainerDied","Data":"0a988784e4cd5a62b0caf93fa5e26dcbbee2a94fe0c27ac1e711607985214c00"} Feb 18 02:03:47 crc kubenswrapper[4858]: I0218 02:03:47.499319 4858 scope.go:117] "RemoveContainer" containerID="9c8b69ab53e702784eb3dc2d34670570cc4f7a67272f082a0a2d58c9b7317546" Feb 18 02:03:47 crc kubenswrapper[4858]: I0218 02:03:47.499394 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8jz4" Feb 18 02:03:47 crc kubenswrapper[4858]: E0218 02:03:47.522182 4858 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.12:34748->38.102.83.12:33927: read tcp 38.102.83.12:34748->38.102.83.12:33927: read: connection reset by peer Feb 18 02:03:47 crc kubenswrapper[4858]: I0218 02:03:47.535539 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8jz4"] Feb 18 02:03:47 crc kubenswrapper[4858]: I0218 02:03:47.545471 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j8jz4"] Feb 18 02:03:47 crc kubenswrapper[4858]: I0218 02:03:47.548801 4858 scope.go:117] "RemoveContainer" containerID="92f80cc673e76029ba6cbd3ea3e534c1974573ed1da57f4d1a9294d9a3cf01d7" Feb 18 02:03:47 crc kubenswrapper[4858]: I0218 02:03:47.575818 4858 scope.go:117] "RemoveContainer" containerID="ef7eeaf201396aedaeb0ecb135421b8ae06d024b9e612f77204d57b910791e46" Feb 18 02:03:49 crc kubenswrapper[4858]: I0218 02:03:49.442647 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a428f397-a49d-449e-a39e-d1739ea4c0d8" path="/var/lib/kubelet/pods/a428f397-a49d-449e-a39e-d1739ea4c0d8/volumes" Feb 18 02:03:50 crc kubenswrapper[4858]: E0218 02:03:50.422141 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:04:02 crc kubenswrapper[4858]: E0218 02:04:02.424254 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:04:02 crc kubenswrapper[4858]: I0218 02:04:02.424860 4858 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 02:04:02 crc kubenswrapper[4858]: E0218 02:04:02.521086 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 02:04:02 crc kubenswrapper[4858]: E0218 02:04:02.521153 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 02:04:02 crc kubenswrapper[4858]: E0218 02:04:02.521287 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vo
lumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 02:04:02 crc kubenswrapper[4858]: E0218 02:04:02.522598 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:04:06 crc kubenswrapper[4858]: I0218 02:04:06.818440 4858 trace.go:236] Trace[1444720018]: "Calculate volume metrics of storage for pod minio-dev/minio" (18-Feb-2026 02:04:05.214) (total time: 1604ms): Feb 18 02:04:06 crc kubenswrapper[4858]: Trace[1444720018]: [1.604371526s] [1.604371526s] END Feb 18 02:04:13 crc kubenswrapper[4858]: E0218 02:04:13.421912 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:04:15 crc kubenswrapper[4858]: E0218 02:04:15.554163 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 02:04:15 crc kubenswrapper[4858]: E0218 02:04:15.554476 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 02:04:15 crc kubenswrapper[4858]: E0218 02:04:15.554617 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 02:04:15 crc kubenswrapper[4858]: E0218 02:04:15.555805 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:04:27 crc kubenswrapper[4858]: E0218 02:04:27.442546 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:04:27 crc kubenswrapper[4858]: E0218 02:04:27.442613 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:04:39 crc kubenswrapper[4858]: E0218 02:04:39.422684 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:04:40 crc kubenswrapper[4858]: E0218 02:04:40.420428 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:04:51 crc kubenswrapper[4858]: E0218 02:04:51.424443 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:04:54 crc kubenswrapper[4858]: E0218 02:04:54.423687 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:04:55 crc kubenswrapper[4858]: I0218 02:04:55.265251 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 02:04:55 crc kubenswrapper[4858]: I0218 02:04:55.265657 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 02:05:03 crc kubenswrapper[4858]: E0218 02:05:03.424254 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:05:08 crc kubenswrapper[4858]: E0218 02:05:08.423962 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:05:14 crc kubenswrapper[4858]: I0218 02:05:14.422468 4858 generic.go:334] "Generic (PLEG): container finished" podID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" containerID="5d5310471c2a091ce4142f9c46e17925bfa7a15dc653ce04acd974470e5fc9c6" exitCode=0 Feb 18 02:05:14 crc kubenswrapper[4858]: I0218 02:05:14.422612 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" event={"ID":"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3","Type":"ContainerDied","Data":"5d5310471c2a091ce4142f9c46e17925bfa7a15dc653ce04acd974470e5fc9c6"} Feb 18 02:05:14 crc kubenswrapper[4858]: I0218 02:05:14.423919 4858 scope.go:117] "RemoveContainer" containerID="5d5310471c2a091ce4142f9c46e17925bfa7a15dc653ce04acd974470e5fc9c6" Feb 18 02:05:15 crc kubenswrapper[4858]: I0218 02:05:15.010917 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ngw6p_must-gather-w2lzv_f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3/gather/0.log" Feb 18 02:05:17 crc kubenswrapper[4858]: E0218 02:05:17.432835 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:05:19 crc kubenswrapper[4858]: E0218 02:05:19.422154 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.364391 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ngw6p/must-gather-w2lzv"] Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.365141 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" podUID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" containerName="copy" containerID="cri-o://aa409c3bfdd51672b3e3c976a0b811ae2f708cc711e3de84e714fc1903da5671" gracePeriod=2 Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.376621 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ngw6p/must-gather-w2lzv"] Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.522140 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ngw6p_must-gather-w2lzv_f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3/copy/0.log" Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.522524 4858 generic.go:334] "Generic (PLEG): container finished" podID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" 
containerID="aa409c3bfdd51672b3e3c976a0b811ae2f708cc711e3de84e714fc1903da5671" exitCode=143 Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.826534 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ngw6p_must-gather-w2lzv_f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3/copy/0.log" Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.827170 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.936989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-must-gather-output\") pod \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\" (UID: \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\") " Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.937377 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2b8m\" (UniqueName: \"kubernetes.io/projected/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-kube-api-access-c2b8m\") pod \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\" (UID: \"f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3\") " Feb 18 02:05:22 crc kubenswrapper[4858]: I0218 02:05:22.942635 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-kube-api-access-c2b8m" (OuterVolumeSpecName: "kube-api-access-c2b8m") pod "f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" (UID: "f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3"). InnerVolumeSpecName "kube-api-access-c2b8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:05:23 crc kubenswrapper[4858]: I0218 02:05:23.040402 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2b8m\" (UniqueName: \"kubernetes.io/projected/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-kube-api-access-c2b8m\") on node \"crc\" DevicePath \"\"" Feb 18 02:05:23 crc kubenswrapper[4858]: I0218 02:05:23.099731 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" (UID: "f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:05:23 crc kubenswrapper[4858]: I0218 02:05:23.142377 4858 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 02:05:23 crc kubenswrapper[4858]: I0218 02:05:23.431885 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" path="/var/lib/kubelet/pods/f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3/volumes" Feb 18 02:05:23 crc kubenswrapper[4858]: I0218 02:05:23.531874 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ngw6p_must-gather-w2lzv_f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3/copy/0.log" Feb 18 02:05:23 crc kubenswrapper[4858]: I0218 02:05:23.532522 4858 scope.go:117] "RemoveContainer" containerID="aa409c3bfdd51672b3e3c976a0b811ae2f708cc711e3de84e714fc1903da5671" Feb 18 02:05:23 crc kubenswrapper[4858]: I0218 02:05:23.532548 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ngw6p/must-gather-w2lzv" Feb 18 02:05:23 crc kubenswrapper[4858]: I0218 02:05:23.556222 4858 scope.go:117] "RemoveContainer" containerID="5d5310471c2a091ce4142f9c46e17925bfa7a15dc653ce04acd974470e5fc9c6" Feb 18 02:05:25 crc kubenswrapper[4858]: I0218 02:05:25.265345 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 02:05:25 crc kubenswrapper[4858]: I0218 02:05:25.265891 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 02:05:29 crc kubenswrapper[4858]: E0218 02:05:29.421585 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:05:31 crc kubenswrapper[4858]: E0218 02:05:31.426831 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:05:44 crc kubenswrapper[4858]: E0218 02:05:44.423208 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:05:45 crc kubenswrapper[4858]: E0218 02:05:45.422309 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:05:55 crc kubenswrapper[4858]: I0218 02:05:55.265335 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 02:05:55 crc kubenswrapper[4858]: I0218 02:05:55.266142 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 02:05:55 crc kubenswrapper[4858]: I0218 02:05:55.266241 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" Feb 18 02:05:55 crc kubenswrapper[4858]: I0218 02:05:55.267544 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"faf01201cfa461d475f5939c0b4356cca132162d62c13a283f57f29c608f2176"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 02:05:55 crc kubenswrapper[4858]: I0218 02:05:55.267662 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://faf01201cfa461d475f5939c0b4356cca132162d62c13a283f57f29c608f2176" gracePeriod=600 Feb 18 02:05:56 crc kubenswrapper[4858]: I0218 02:05:56.904850 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="faf01201cfa461d475f5939c0b4356cca132162d62c13a283f57f29c608f2176" exitCode=0 Feb 18 02:05:56 crc kubenswrapper[4858]: I0218 02:05:56.905391 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"faf01201cfa461d475f5939c0b4356cca132162d62c13a283f57f29c608f2176"} Feb 18 02:05:56 crc kubenswrapper[4858]: I0218 02:05:56.905424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerStarted","Data":"ab9e2b13eeb887b40c3392f885b6d152add630b1aaaa81b85a81edd0399edc9f"} Feb 18 02:05:56 crc kubenswrapper[4858]: I0218 02:05:56.905454 4858 scope.go:117] "RemoveContainer" containerID="6153f3502525c73d811b3059716f660cf2ed2b46a77eb5efc0ffdf4168a5ebc4" Feb 18 02:05:57 crc kubenswrapper[4858]: E0218 02:05:57.435386 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:05:58 crc kubenswrapper[4858]: E0218 02:05:58.422224 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:06:09 crc kubenswrapper[4858]: E0218 02:06:09.422131 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:06:11 crc kubenswrapper[4858]: E0218 02:06:11.424127 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:06:22 crc kubenswrapper[4858]: E0218 02:06:22.422825 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:06:23 crc kubenswrapper[4858]: E0218 02:06:23.422593 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:06:33 crc kubenswrapper[4858]: E0218 02:06:33.423228 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:06:37 crc kubenswrapper[4858]: E0218 02:06:37.436937 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:06:45 crc kubenswrapper[4858]: E0218 02:06:45.426244 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:06:50 crc kubenswrapper[4858]: E0218 02:06:50.422893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:06:54 crc kubenswrapper[4858]: I0218 02:06:54.760997 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="acb8b920-9bb7-42b7-8bf7-e8f6b5880654" containerName="galera" probeResult="failure" output="command timed out" Feb 18 02:06:54 crc kubenswrapper[4858]: I0218 02:06:54.768311 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="acb8b920-9bb7-42b7-8bf7-e8f6b5880654" containerName="galera" probeResult="failure" output="command timed out" Feb 18 02:06:58 crc kubenswrapper[4858]: E0218 02:06:58.421108 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:07:02 crc 
kubenswrapper[4858]: E0218 02:07:02.423091 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:07:11 crc kubenswrapper[4858]: E0218 02:07:11.422171 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:07:15 crc kubenswrapper[4858]: E0218 02:07:15.424135 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:07:26 crc kubenswrapper[4858]: E0218 02:07:26.421357 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:07:29 crc kubenswrapper[4858]: E0218 02:07:29.422535 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:07:37 crc kubenswrapper[4858]: E0218 02:07:37.433180 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:07:42 crc kubenswrapper[4858]: E0218 02:07:42.423811 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:07:50 crc kubenswrapper[4858]: E0218 02:07:50.422202 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:07:55 crc kubenswrapper[4858]: E0218 02:07:55.420883 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:08:03 crc kubenswrapper[4858]: E0218 02:08:03.423702 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:08:07 crc kubenswrapper[4858]: E0218 02:08:07.443277 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:08:17 crc kubenswrapper[4858]: E0218 02:08:17.422051 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:08:19 crc kubenswrapper[4858]: E0218 02:08:19.425307 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:08:25 crc kubenswrapper[4858]: I0218 02:08:25.265317 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 02:08:25 crc kubenswrapper[4858]: I0218 02:08:25.266027 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 02:08:29 crc kubenswrapper[4858]: E0218 02:08:29.422231 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.669067 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fqtkg"] Feb 18 02:08:30 crc kubenswrapper[4858]: E0218 02:08:30.670467 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerName="extract-utilities" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.670491 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerName="extract-utilities" Feb 18 
02:08:30 crc kubenswrapper[4858]: E0218 02:08:30.670525 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" containerName="copy" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.670534 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" containerName="copy" Feb 18 02:08:30 crc kubenswrapper[4858]: E0218 02:08:30.670567 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" containerName="gather" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.670576 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" containerName="gather" Feb 18 02:08:30 crc kubenswrapper[4858]: E0218 02:08:30.670593 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerName="extract-content" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.670768 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerName="extract-content" Feb 18 02:08:30 crc kubenswrapper[4858]: E0218 02:08:30.670807 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerName="registry-server" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.670818 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerName="registry-server" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.671111 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a428f397-a49d-449e-a39e-d1739ea4c0d8" containerName="registry-server" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.671138 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" containerName="copy" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.671153 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f097f3ff-e18c-4f92-a4f1-3e6cf8e548f3" containerName="gather" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.675574 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.708939 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqtkg"] Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.803708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-catalog-content\") pod \"redhat-marketplace-fqtkg\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.803772 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-kube-api-access-szc25\") pod \"redhat-marketplace-fqtkg\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.804013 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-utilities\") pod \"redhat-marketplace-fqtkg\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.906654 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-catalog-content\") pod \"redhat-marketplace-fqtkg\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.906711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-kube-api-access-szc25\") pod \"redhat-marketplace-fqtkg\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.906758 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-utilities\") pod \"redhat-marketplace-fqtkg\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.907260 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-catalog-content\") pod \"redhat-marketplace-fqtkg\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.907291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-utilities\") pod \"redhat-marketplace-fqtkg\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:30 crc kubenswrapper[4858]: I0218 02:08:30.926599 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-kube-api-access-szc25\") pod \"redhat-marketplace-fqtkg\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:31 crc kubenswrapper[4858]: I0218 02:08:31.007786 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:31 crc kubenswrapper[4858]: I0218 02:08:31.580035 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqtkg"] Feb 18 02:08:31 crc kubenswrapper[4858]: I0218 02:08:31.692351 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqtkg" event={"ID":"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1","Type":"ContainerStarted","Data":"193408068b23c0f180a66b3391db99132f75d2cb7c5cbdd1e8655c6b857a445f"} Feb 18 02:08:32 crc kubenswrapper[4858]: I0218 02:08:32.707882 4858 generic.go:334] "Generic (PLEG): container finished" podID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerID="1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e" exitCode=0 Feb 18 02:08:32 crc kubenswrapper[4858]: I0218 02:08:32.708257 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqtkg" event={"ID":"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1","Type":"ContainerDied","Data":"1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e"} Feb 18 02:08:33 crc kubenswrapper[4858]: E0218 02:08:33.420676 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:08:33 crc kubenswrapper[4858]: I0218 02:08:33.720123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqtkg" event={"ID":"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1","Type":"ContainerStarted","Data":"c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657"} Feb 18 02:08:34 crc kubenswrapper[4858]: I0218 02:08:34.738419 4858 generic.go:334] "Generic (PLEG): container finished" podID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerID="c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657" exitCode=0 Feb 18 02:08:34 crc kubenswrapper[4858]: I0218 02:08:34.738933 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqtkg" event={"ID":"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1","Type":"ContainerDied","Data":"c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657"} Feb 18 02:08:35 crc kubenswrapper[4858]: I0218 02:08:35.750830 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqtkg" event={"ID":"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1","Type":"ContainerStarted","Data":"0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7"} Feb 18 02:08:35 crc kubenswrapper[4858]: I0218 02:08:35.780592 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fqtkg" podStartSLOduration=3.321943246 podStartE2EDuration="5.78056883s" podCreationTimestamp="2026-02-18 02:08:30 +0000 UTC" firstStartedPulling="2026-02-18 02:08:32.71167518 +0000 UTC 
m=+5666.017511922" lastFinishedPulling="2026-02-18 02:08:35.170300744 +0000 UTC m=+5668.476137506" observedRunningTime="2026-02-18 02:08:35.771037632 +0000 UTC m=+5669.076874374" watchObservedRunningTime="2026-02-18 02:08:35.78056883 +0000 UTC m=+5669.086405562" Feb 18 02:08:41 crc kubenswrapper[4858]: I0218 02:08:41.009835 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:41 crc kubenswrapper[4858]: I0218 02:08:41.012465 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:41 crc kubenswrapper[4858]: I0218 02:08:41.118208 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:41 crc kubenswrapper[4858]: I0218 02:08:41.900936 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:41 crc kubenswrapper[4858]: I0218 02:08:41.967250 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqtkg"] Feb 18 02:08:43 crc kubenswrapper[4858]: I0218 02:08:43.838699 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fqtkg" podUID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerName="registry-server" containerID="cri-o://0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7" gracePeriod=2 Feb 18 02:08:44 crc kubenswrapper[4858]: E0218 02:08:44.421746 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.531174 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.658032 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-catalog-content\") pod \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.658122 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-utilities\") pod \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.658227 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-kube-api-access-szc25\") pod \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\" (UID: \"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1\") " Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.659077 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-utilities" (OuterVolumeSpecName: "utilities") pod "87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" (UID: "87bd4688-7b81-4b36-b193-ecdeb8b8e7b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.679766 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-kube-api-access-szc25" (OuterVolumeSpecName: "kube-api-access-szc25") pod "87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" (UID: "87bd4688-7b81-4b36-b193-ecdeb8b8e7b1"). InnerVolumeSpecName "kube-api-access-szc25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.699422 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" (UID: "87bd4688-7b81-4b36-b193-ecdeb8b8e7b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.762198 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-kube-api-access-szc25\") on node \"crc\" DevicePath \"\"" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.762256 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.762281 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.851772 4858 generic.go:334] "Generic (PLEG): container finished" podID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerID="0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7" exitCode=0 Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.851826 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqtkg" event={"ID":"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1","Type":"ContainerDied","Data":"0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7"} Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.851853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqtkg" event={"ID":"87bd4688-7b81-4b36-b193-ecdeb8b8e7b1","Type":"ContainerDied","Data":"193408068b23c0f180a66b3391db99132f75d2cb7c5cbdd1e8655c6b857a445f"} Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.851870 4858 scope.go:117] "RemoveContainer" containerID="0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.852032 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqtkg" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.892175 4858 scope.go:117] "RemoveContainer" containerID="c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.896126 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqtkg"] Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.908278 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqtkg"] Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.924988 4858 scope.go:117] "RemoveContainer" containerID="1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.974308 4858 scope.go:117] "RemoveContainer" containerID="0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7" Feb 18 02:08:44 crc kubenswrapper[4858]: E0218 02:08:44.974921 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7\": container with ID starting with 0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7 not found: ID does not exist" containerID="0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.975037 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7"} err="failed to get container status \"0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7\": rpc error: code = NotFound desc = could not find container \"0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7\": container with ID starting with 0e0954dd4a4045fe9f3c2411e76173fd4993deac961ab16c7d3dcb93950b76d7 not found: ID does not exist" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.975117 4858 scope.go:117] "RemoveContainer" containerID="c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657" Feb 18 02:08:44 crc kubenswrapper[4858]: E0218 02:08:44.976325 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657\": container with ID starting with c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657 not found: ID does not exist" containerID="c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.976370 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657"} err="failed to get container status \"c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657\": rpc error: code = NotFound desc = could not find container \"c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657\": container with ID starting with c290f5a9a246058c2cbc21077e8d6b876d478dcb0f2841376ffd8065d8d7f657 not found: ID does not exist" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.976395 4858 scope.go:117] "RemoveContainer" containerID="1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e" Feb 18 02:08:44 crc kubenswrapper[4858]: E0218 02:08:44.976915 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e\": container with ID starting with 1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e not found: ID does not exist" containerID="1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e" Feb 18 02:08:44 crc kubenswrapper[4858]: I0218 02:08:44.976934 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e"} err="failed to get container status \"1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e\": rpc error: code = NotFound desc = could not find container \"1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e\": container with ID starting with 1f5f12e9fc5d2b562d60ce0c35f07a35073e3d9270b91ca284e0375203fd851e not found: ID does not exist" Feb 18 02:08:45 crc kubenswrapper[4858]: E0218 02:08:45.421451 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:08:45 crc kubenswrapper[4858]: I0218 02:08:45.435997 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" path="/var/lib/kubelet/pods/87bd4688-7b81-4b36-b193-ecdeb8b8e7b1/volumes" Feb 18 02:08:55 crc kubenswrapper[4858]: I0218 02:08:55.265533 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 02:08:55 crc kubenswrapper[4858]: I0218 02:08:55.267251 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 02:08:57 crc kubenswrapper[4858]: E0218 02:08:57.432911 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9" Feb 18 02:09:00 crc kubenswrapper[4858]: E0218 02:09:00.423308 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5" Feb 18 02:09:10 crc kubenswrapper[4858]: I0218 02:09:10.422793 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 02:09:10 crc kubenswrapper[4858]: E0218 02:09:10.549116 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 02:09:10 crc kubenswrapper[4858]: E0218 02:09:10.549234 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 18 02:09:10 crc kubenswrapper[4858]: E0218 02:09:10.549552 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-h2mps_openstack(8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 18 02:09:10 crc kubenswrapper[4858]: E0218 02:09:10.550863 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9"
Feb 18 02:09:11 crc kubenswrapper[4858]: E0218 02:09:11.420469 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5"
Feb 18 02:09:23 crc kubenswrapper[4858]: E0218 02:09:23.421205 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9"
Feb 18 02:09:25 crc kubenswrapper[4858]: I0218 02:09:25.267265 4858 patch_prober.go:28] interesting pod/machine-config-daemon-cbdbf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 02:09:25 crc kubenswrapper[4858]: I0218 02:09:25.267633 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 02:09:25 crc kubenswrapper[4858]: I0218 02:09:25.267686 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf"
Feb 18 02:09:25 crc kubenswrapper[4858]: I0218 02:09:25.268595 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab9e2b13eeb887b40c3392f885b6d152add630b1aaaa81b85a81edd0399edc9f"} pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 02:09:25 crc kubenswrapper[4858]: I0218 02:09:25.268655 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b" containerName="machine-config-daemon" containerID="cri-o://ab9e2b13eeb887b40c3392f885b6d152add630b1aaaa81b85a81edd0399edc9f" gracePeriod=600
Feb 18 02:09:25 crc kubenswrapper[4858]: E0218 02:09:25.407377 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b"
Feb 18 02:09:25 crc kubenswrapper[4858]: E0218 02:09:25.502884 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 18 02:09:25 crc kubenswrapper[4858]: E0218 02:09:25.502943 4858 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 18 02:09:25 crc kubenswrapper[4858]: E0218 02:09:25.503059 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h65dh56ch8chbh597h687h696h57bhdchcbhb6h9h648hb6h686h666h57ch557h55ch68ch76h686h5f7hb7hc6h68hdh67fh8bhd7hbcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1b28954c-8d35-4f43-a44b-307a56f6fff5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 18 02:09:25 crc kubenswrapper[4858]: E0218 02:09:25.504314 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5"
Feb 18 02:09:26 crc kubenswrapper[4858]: I0218 02:09:26.338021 4858 generic.go:334] "Generic (PLEG): container finished" podID="7172df49-6116-4968-a2b5-a1afb116568b" containerID="ab9e2b13eeb887b40c3392f885b6d152add630b1aaaa81b85a81edd0399edc9f" exitCode=0
Feb 18 02:09:26 crc kubenswrapper[4858]: I0218 02:09:26.338090 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" event={"ID":"7172df49-6116-4968-a2b5-a1afb116568b","Type":"ContainerDied","Data":"ab9e2b13eeb887b40c3392f885b6d152add630b1aaaa81b85a81edd0399edc9f"}
Feb 18 02:09:26 crc kubenswrapper[4858]: I0218 02:09:26.338423 4858 scope.go:117] "RemoveContainer" containerID="faf01201cfa461d475f5939c0b4356cca132162d62c13a283f57f29c608f2176"
Feb 18 02:09:26 crc kubenswrapper[4858]: I0218 02:09:26.339290 4858 scope.go:117] "RemoveContainer" containerID="ab9e2b13eeb887b40c3392f885b6d152add630b1aaaa81b85a81edd0399edc9f"
Feb 18 02:09:26 crc kubenswrapper[4858]: E0218 02:09:26.339675 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b"
Feb 18 02:09:35 crc kubenswrapper[4858]: E0218 02:09:35.425159 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9"
Feb 18 02:09:36 crc kubenswrapper[4858]: E0218 02:09:36.422301 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1b28954c-8d35-4f43-a44b-307a56f6fff5"
Feb 18 02:09:37 crc kubenswrapper[4858]: I0218 02:09:37.427700 4858 scope.go:117] "RemoveContainer" containerID="ab9e2b13eeb887b40c3392f885b6d152add630b1aaaa81b85a81edd0399edc9f"
Feb 18 02:09:37 crc kubenswrapper[4858]: E0218 02:09:37.428229 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cbdbf_openshift-machine-config-operator(7172df49-6116-4968-a2b5-a1afb116568b)\"" pod="openshift-machine-config-operator/machine-config-daemon-cbdbf" podUID="7172df49-6116-4968-a2b5-a1afb116568b"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.744692 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5kcqc"]
Feb 18 02:09:39 crc kubenswrapper[4858]: E0218 02:09:39.745469 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerName="registry-server"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.745486 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerName="registry-server"
Feb 18 02:09:39 crc kubenswrapper[4858]: E0218 02:09:39.745533 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerName="extract-utilities"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.745545 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerName="extract-utilities"
Feb 18 02:09:39 crc kubenswrapper[4858]: E0218 02:09:39.745564 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerName="extract-content"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.745572 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerName="extract-content"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.745819 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bd4688-7b81-4b36-b193-ecdeb8b8e7b1" containerName="registry-server"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.747570 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.773965 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kcqc"]
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.893138 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4pb\" (UniqueName: \"kubernetes.io/projected/15e87b82-873f-41dd-a733-e391ef5b519e-kube-api-access-kk4pb\") pod \"redhat-operators-5kcqc\" (UID: \"15e87b82-873f-41dd-a733-e391ef5b519e\") " pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.893265 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e87b82-873f-41dd-a733-e391ef5b519e-utilities\") pod \"redhat-operators-5kcqc\" (UID: \"15e87b82-873f-41dd-a733-e391ef5b519e\") " pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.893309 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e87b82-873f-41dd-a733-e391ef5b519e-catalog-content\") pod \"redhat-operators-5kcqc\" (UID: \"15e87b82-873f-41dd-a733-e391ef5b519e\") " pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.995332 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e87b82-873f-41dd-a733-e391ef5b519e-utilities\") pod \"redhat-operators-5kcqc\" (UID: \"15e87b82-873f-41dd-a733-e391ef5b519e\") " pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.995397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e87b82-873f-41dd-a733-e391ef5b519e-catalog-content\") pod \"redhat-operators-5kcqc\" (UID: \"15e87b82-873f-41dd-a733-e391ef5b519e\") " pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.995476 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4pb\" (UniqueName: \"kubernetes.io/projected/15e87b82-873f-41dd-a733-e391ef5b519e-kube-api-access-kk4pb\") pod \"redhat-operators-5kcqc\" (UID: \"15e87b82-873f-41dd-a733-e391ef5b519e\") " pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.996259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e87b82-873f-41dd-a733-e391ef5b519e-utilities\") pod \"redhat-operators-5kcqc\" (UID: \"15e87b82-873f-41dd-a733-e391ef5b519e\") " pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:39 crc kubenswrapper[4858]: I0218 02:09:39.996467 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e87b82-873f-41dd-a733-e391ef5b519e-catalog-content\") pod \"redhat-operators-5kcqc\" (UID: \"15e87b82-873f-41dd-a733-e391ef5b519e\") " pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:40 crc kubenswrapper[4858]: I0218 02:09:40.020834 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4pb\" (UniqueName: \"kubernetes.io/projected/15e87b82-873f-41dd-a733-e391ef5b519e-kube-api-access-kk4pb\") pod \"redhat-operators-5kcqc\" (UID: \"15e87b82-873f-41dd-a733-e391ef5b519e\") " pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:40 crc kubenswrapper[4858]: I0218 02:09:40.100381 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kcqc"
Feb 18 02:09:40 crc kubenswrapper[4858]: I0218 02:09:40.688515 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kcqc"]
Feb 18 02:09:40 crc kubenswrapper[4858]: W0218 02:09:40.693531 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15e87b82_873f_41dd_a733_e391ef5b519e.slice/crio-979c19a6d3008924edec3ff4e6fb908ff811447b86db5a56cef425029bdb4c49 WatchSource:0}: Error finding container 979c19a6d3008924edec3ff4e6fb908ff811447b86db5a56cef425029bdb4c49: Status 404 returned error can't find the container with id 979c19a6d3008924edec3ff4e6fb908ff811447b86db5a56cef425029bdb4c49
Feb 18 02:09:41 crc kubenswrapper[4858]: I0218 02:09:41.536942 4858 generic.go:334] "Generic (PLEG): container finished" podID="15e87b82-873f-41dd-a733-e391ef5b519e" containerID="fed4ea2a741c2a311d193726220011a862df22377e437e772adbec26e6f5d84b" exitCode=0
Feb 18 02:09:41 crc kubenswrapper[4858]: I0218 02:09:41.537156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kcqc" event={"ID":"15e87b82-873f-41dd-a733-e391ef5b519e","Type":"ContainerDied","Data":"fed4ea2a741c2a311d193726220011a862df22377e437e772adbec26e6f5d84b"}
Feb 18 02:09:41 crc kubenswrapper[4858]: I0218 02:09:41.537180 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kcqc" event={"ID":"15e87b82-873f-41dd-a733-e391ef5b519e","Type":"ContainerStarted","Data":"979c19a6d3008924edec3ff4e6fb908ff811447b86db5a56cef425029bdb4c49"}
Feb 18 02:09:42 crc kubenswrapper[4858]: I0218 02:09:42.548036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kcqc" event={"ID":"15e87b82-873f-41dd-a733-e391ef5b519e","Type":"ContainerStarted","Data":"d9720ea7ed2b804c2b609e4991d84d450be1313e39ab8d703e25b8fcc7c58d55"}
Feb 18 02:09:46 crc kubenswrapper[4858]: E0218 02:09:46.420826 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-h2mps" podUID="8d4b989d-a12e-4902-b4fa-c64e7d8e0fd9"
Feb 18 02:09:46 crc kubenswrapper[4858]: I0218 02:09:46.583732 4858 generic.go:334] "Generic (PLEG): container finished" podID="15e87b82-873f-41dd-a733-e391ef5b519e" containerID="d9720ea7ed2b804c2b609e4991d84d450be1313e39ab8d703e25b8fcc7c58d55" exitCode=0
Feb 18 02:09:46 crc kubenswrapper[4858]: I0218 02:09:46.583770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kcqc" event={"ID":"15e87b82-873f-41dd-a733-e391ef5b519e","Type":"ContainerDied","Data":"d9720ea7ed2b804c2b609e4991d84d450be1313e39ab8d703e25b8fcc7c58d55"}